General discussion

  • Creator
    Topic
  • #2180167

    Critical Thinking

    Locked

    by justin james ·

    blog root

All Comments

  • Author
    Replies
    • #3117655

      Intel Macs, what’s the big deal?

      by justin james ·

      In reply to Critical Thinking

      Maybe I’m just being cynical, but I fail to see why Apple’s decision to move to the Intel architecture is such a big deal. Let’s get real here, folks: it’s simply a change in CPUs.

      OK, so maybe there’s more to it. So the Macs may drop a touch in price, and/or get a bit faster. Personally, I’ve been wanting to get a Mac for a while, ever since I put together a server on BSD. I loved BSD’s reliability and speed, especially compared to the Windows 2003 Enterprise server that it replaced. Put simply, BSD rocks, and the idea of using a computer built on BSD, plus a usable GUI (call me crazy, but I think X Windows, at least in every X environment I’ve worked in, is total garbage to deal with, just a fancy way of handling multiple shell sessions), would be really nice. When Apple announced the Mac mini, I was extremely happy and started saving my pennies. Sure, I could get a pretty decent PC for the same money, but I’d still have the same PC problems. And frankly, the perceived lack of software for Macs doesn’t bother me too much, because my home PC doesn’t do anything that I can’t do on a Mac or a BSD system. So I was already prepared to take the Mac plunge. I’ve even downloaded installers for all of my common software, just waiting to FTP them over the moment I get it plugged in.

      Am I holding off on getting that Mac mini because I want Intel? Heck no. I’m holding off for the same reason my home PC is an Athlon 1900+ with 256 MB RAM: I have higher financial priorities at the moment. But I’m looking forward to it.

      When Apple announced that they would be switching to Intel chips, everyone made such a big deal about it. I didn’t see the cause for the hoopla then, nor do I see it now. OK, maybe IBM didn’t have the roadmap for the G5 that Apple wanted, and maybe they were having some bad karma with Motorola. But at the end of the day, none of my Mac-using friends ever complained about having a slow Mac; they are all delighted with their machines. The current G5 chips are good enough for now, and moving to Intel is a simple business decision to keep the future as bright as the present.

      So I started asking around, trying to find out what the big deal is. My Mac friends could not care less. As long as they’re using Mac OSX, they don’t care if there are ferrets running inside the box handing each other pieces of paper with ones and zeros written on them. They just love the OS. My PC friends are all delighted because they have these grandiose dreams of dual booting.

      Sorry folks, I’ve been down the dual boot route. NT4 & Windows 95. OS/2 Warp and Windows 3.1. Windows 98 & BeOS (yes, I tried BeOS, loved it to death, no applications or drivers, sad to say). Indeed, BeOS was originally designed for the PowerPC, then ported over to x86 when Be couldn’t sell any of their boxes. I did Windows 98 and Windows 2000 for a while too. But at the end of the day, I always despised dual booting. Life is always more miserable when you dual boot. Many advantages of each OS are tied to the file system. NTFS is the cornerstone of NT/XP’s security system. HPFS was integral to OS/2 Warp. And I’m sure that HFS+ plays a large role in OSX’s capabilities. Yes, I’m aware that many of the OSes I’ve listed can read NTFS. But they can’t write to NTFS. No Microsoft OS reads or writes anything other than FAT16/32 and NTFS (actually, NT 4 may have been able to handle HPFS, if memory serves). The point is, you’re going to end up sticking a giant FAT32 partition somewhere in your system for your common data files, plus two more system partitions, one NTFS and one HFS+. And to be honest, I 100% hate that idea. I jump through hoops to have only one volume mounted in my system; I don’t like dealing with drive letters (I know that MacOSX doesn’t use drive letters). It’s a pain in the rear to have to figure out which directory a file goes into, not just based upon its contents, but also upon which directory or volume or whatever has space remaining.

      Plus, dual booting is a huge waste of time and interrupts my workflow. People buy faster computer parts because they don’t want to wait thirty seconds to two minutes starting an application. But with dual booting, this is exactly what you’ll be doing. Need an application that runs on the OS you’re not currently working with? Well, you get to stop EVERYTHING you are doing and reboot. Heaven help you if you miss the boot loader and end up in the wrong OS. Furthermore, isn’t one of the reasons we like our newer operating systems that we have to reboot less often? Every new version of Windows certainly advertises this as a selling point. No one enjoys having to drop everything because something (or a crash) requires a reboot.

      OK, now there’s the hypervisor option (Microsoft’s Virtual PC, Xen, VMware, etc.). Call me silly, call me crazy, but a modern OS sucks up a good amount of RAM and CPU time, even the more efficient ones like the *nixes. Unless all of the OSes you’re running are using microkernels, or aren’t doing much of anything, you can count on having to double your RAM and increase your CPU requirement by at least 25% in order to have two OSes running on the same hardware simultaneously. And if you have one OS running CPU-intensive activities on a regular basis while you work on the other OS, you should go ahead and increase that CPU power by 50% to 100%. Well, what if you are already running a top-of-the-line CPU, or close to it? I guess you’ll just have to suffer performance way below what a single OS would deliver on the same hardware. Not to mention hard drives. Unless you want to have two hard drives (in a hypervisor situation, where each OS has its own volume plus a shared volume for common data, I’d recommend three drives), each OS is going to be trying to read and write from opposite ends of the disk simultaneously. Hooray. In terms of speed, it’ll be like deliberately making myself use tons of swap file space. And we still haven’t overcome the file system issue. I hope you enjoy having all of your data readable by anyone who gets access to your system, because all of that shared data will be on a FAT32 partition. Not to mention that FAT32 is just about the least efficient file system out there, unless you count FAT16. Not to mention the lack of nifty features like NTFS’s built-in compression and encryption, or HFS+’s metadata support (a big selling point for MacOSX), journaling, hard and symbolic links, etc. Alternatively, you could just have everything in NTFS, and the MacOSX system wouldn’t be able to write to it. Just what I’ve always wanted: a 250 GB CD-ROM disc. My other option (probably the best one, sad to say) is to have the common file system be in a native format for one of the two OSes, and then share it via SMB. Yucky, but at least both OSes will be able to read and write the data, and it will keep some of its native file system benefits. So let’s add up this hypervisor nonsense. You are almost doubling your hardware power to achieve the same results; the only things you’re sharing are the peripherals and the optical drive. At that point, doesn’t it almost make sense (except for power consumption) to get a separate Mac and PC plus a KVM switch, from a cost standpoint? Going to the high end of the CPU chain (and the motherboard to support all of this RAM and the fancy CPU) is about as expensive as having the two machines sitting next to each other.
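
      To put rough numbers on those rules of thumb, here is a quick back-of-the-envelope sketch (my own illustration, nothing scientific; the figures are just the percentages above):

      // Back-of-the-envelope host sizing for running two OSes at once, using the
      // rules of thumb above: double the RAM, add at least 25% CPU, and add 50% to
      // 100% more CPU if the second OS stays busy. Purely illustrative numbers.
      function hostEstimate(ramPerOsMb: number, cpuForOneOs: number, secondOsBusy: boolean) {
        const ramMb = ramPerOsMb * 2;                 // each OS wants its full footprint
        const cpuFactor = secondOsBusy ? 1.75 : 1.25; // 25% floor; 1.75 = midpoint of the 50-100% range
        return { ramMb, cpu: cpuForOneOs * cpuFactor };
      }

      // Two OSes that each want 512 MB, against a "1.0x" CPU budget:
      console.log(hostEstimate(512, 1.0, false)); // { ramMb: 1024, cpu: 1.25 }
      console.log(hostEstimate(512, 1.0, true));  // { ramMb: 1024, cpu: 1.75 }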

      Even with OSX running natively on Intel hardware, the applications are not running natively on Intel hardware. They are being emulated, via “Rosetta”. I’m not a big fan of emulation; even when it isn’t buggy, it is still slow. Quite frankly, I’d prefer a Mac that is 10% slower on the hardware running native apps to a Mac running 10% faster while emulating another chip for 80% of my apps. That’s just common sense.

      Oh yeah, and there’s one more catch: MacOSX x86 will only run on Apple hardware. I’m sure that there will be XP drivers for this hardware soon enough; that’s not a concern. But do you honestly think that Apple will stop charging the “Apple Tax” just because they’ve switched to Intel? Sure, G5s are more expensive than x86 chips on a pound-for-pound basis, but not nearly by the same ratio that a Mac is more expensive than an equivalent machine. Compare the Mac mini to some of the low-priced options from Dell and eMachines/Gateway. The Mac mini costs about 20% more, and typically comes with fewer goodies. So yes, the price of a Mac will come down, but by what? $50? Maybe $100? It still puts a PowerMac or even an iBook out of the price range of mortal men. It makes the Mac mini and the eMac slightly more affordable, that’s it.

      But all of my eagerly waiting pals say, “but I won’t use Apple’s hardware, I’m sure someone will release a ‘patch’ to let me run MacOSX x86 on my existing hardware, and someone will have drivers.” Good luck, my friend. First of all, I’m not a big fan of ripping a company off. The profits that Apple makes from their overpriced hardware directly support their continued development of OSX. Deprive Apple of their R&D budget by not buying their hardware, and either the price of OSX goes up (the operating system that charges you for minor version upgrades as it is), or they put less money into developing it. Furthermore, if there is one thing I’m very particular about, it’s stuff like hacking the internals of my operating system and messing with my device drivers. This is the kind of thing that leads to OS instability. I’m not a big fan of OS instability, otherwise I might still be using Windows 98, which would run a heck of a lot faster than XP does for me. This is why I avoid third-party “system tweak tools” like the plague. This is why I don’t let spyware or rootkits onto my system. This is why I don’t upgrade my drivers unless I’m actually having a problem, or unless the driver supports something I desperately need to do. This is why I avoid real-time virus scanning. I avoid these things because an operating system is under enough stress as it is, without some bonehead messing with its internals. Furthermore, is someone who hacks up an operating system to make it run in violation of its license agreement someone I trust to give me an otherwise clean and unmodified OS? I think not. People downloading warez and MP3s through P2P services like Gnutella and BitTorrent are getting hammered by viruses, spyware, and the like. Someone who goes through the effort of cracking an installer could just as easily throw something nasty in there for you as well. I would not trust my OS to come from such a source, and neither should you.

      So at the end of the day, where are we? Using Mac OSX on the Intel architecture effectively won’t be any different than it is today. It won’t be too much faster; if you want to use PC apps you should still have a PC sitting right next to it, and you’ll still be paying through the nose for Apple’s hardware. As excited as I am to get onto OSX as soon as my wallet allows, I don’t see how this gets me there any differently or faster.

      As a parting shot, to all of those who were actually surprised that Apple had an x86 version in the works, I simply point you to the “Ask the developers” page on Apple’s site (note the date when this page was put together: 2001). Also take a look at the source code tree. Darwin (the underlying OS) has been available on x86 architecture since day one. Sure, the GUI isn’t in there, but the OS itself is half the battle. Microsoft had a PPC version of NT back in the NT 3.51 days (they reportedly even had a version running on SPARC!), and it carried through to NT 4. The Xbox 360 runs on a PPC chip, and the original Xbox ran on what is admittedly a modified version of Windows 2000. Every OS manufacturer out there keeps a separate port tree for CPUs they don’t support; it’s common practice, and a smart one too. It leaves their options open in a way that’s a lot cheaper than if they suddenly find themselves without a chipset to support anymore. Plus, it gives them leverage with the hardware folks.

      At the end of the day, no matter how I slice and dice it, I simply fail to see why OSX x86 is such a big deal. Yes, Intel chips are better on power usage, a win for the laptop users out there (BTW, has anyone noticed how far battery technology lags behind power usage?). Yes, Intel has a better roadmap than IBM has for the G5 line. My heart rate might have gone up for a split second if Apple had announced that they were switching to AMD64 technology, but they aren’t. There is nothing to get worked up over here, and this is certainly not a world-changing event. If anyone can explain in a reasonable way why this is actually worth getting excited about, please let me know; I’d be grateful to concede defeat.

      • #3121676

        Intel Macs, what’s the big deal?

        by sbd ·

        In reply to Intel Macs, what’s the big deal?

        WOW!!!

      • #3127551

        Intel Macs, what’s the big deal?

        by salmonslayer ·

        In reply to Intel Macs, what’s the big deal?

        One word — KVM (okay, so actually it isn’t a word but an acronym)

        I have also gone down the dual-boot road, and ran into the same roadblocks. For a while I had three OSs on my primary workstation (Linux, OS/2 and Windows 98). I rarely ran OS/2 — I had some great graphics programs (and still think DeScribe is one of the best word processors out there) but it took too much time to shut down, restart, boot in another OS and then do the work. OS/2 was the first to go. I managed to get a second computer relatively cheap so now have two systems connected via a KVM switch. One has Windows XP and one has Linux, and both are networked. The best thing is that I can bounce back and forth with only a simple keystroke, can access files from either system, and generally use the best of both worlds. This makes life considerably easier than the old days, and sometimes I really wonder why I bothered with dual-booting at all.

    • #3119931

      Thin Computing Is Rarely “Thin”

      by justin james ·

      In reply to Critical Thinking

      For some reason, a huge number of people on ZDNet, both in TalkBacks and in articles (Mr. Berlind, are you reading?) confuse a central file server with “thin computing”.

      Mr. Berlind’s example situation (a really bad one, at that) is simply using a web browser as a thick client application to access a central file server. Especially in the AJAX world, clients get thicker, not thinner! For example, I have been doing some writing on this website right here. Their blogging software puts such a heavy demand on my system at home that it was taking up to thirty seconds for text to appear on the screen. Each keystroke turned my cursor into the “arrow + hourglass” cursor. Granted, my computer at home is no screamer (Athlon 1900+, 256 MB RAM), but that is what AJAX does. It requires an extraordinarily thick client to run. If I compare that AJAXed system to, say, using MS Word (a “thick client” piece of software) or SSHing to a BSD box and running vi (a true “thin client” situation), the AJAX system comes out dead last in terms of CPU usage. And after my system does all of the processing for formatting and whatnot, what happens? It stores the data (via HTTP POST) to a central server, which stores that information somewhere, performs a bit of error checking, and then makes a few entries into a database table somewhere. If I compare the CPU usage on my system by the AJAX interface to the CPU usage on the server to file my entry, it sure doesn’t look like a “thin client/thick server” situation to me! It looks a heck of a lot closer to a “thick client/dumb file server” story.
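
      To make the point concrete, here is a bare-bones sketch of the kind of thing an AJAX editor does (my own illustration, not TechRepublic’s actual code; the element IDs and the /api/save-draft endpoint are made up):

      // Minimal sketch of an AJAX editor: the browser re-parses and re-renders the
      // entry on every keystroke, while the server just receives a POST and files it.
      // Not the actual blogging software; the element IDs and endpoint are made up.
      const editor = document.getElementById("entry") as HTMLTextAreaElement;
      const preview = document.getElementById("preview") as HTMLDivElement;

      editor.addEventListener("keyup", () => {
        const text = editor.value;

        // The "thick" part: client-side formatting work on each keystroke.
        const html = text
          .split(/\n{2,}/)
          .map(p => "<p>" + p.replace(/\*(.+?)\*/g, "<em>$1</em>") + "</p>")
          .join("");
        preview.innerHTML = html;

        // The "thin" part: the server merely stores whatever it is handed.
        fetch("/api/save-draft", {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ text }),
        });
      });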

      Some of the comments to this story are already making this mistake. “Oh, I like the idea of having all of my data on a central server.” So do I. This is how everyone except for small businesses and home users has been doing it for decades. The fact that most business people stick all of their documents onto their local hard drives is due to a failure of the IT departments to do their jobs properly, for which they should be fired. Why are people in marketing sticking their data on the local drive instead of on the server? Anyone who saves data locally and loses it should be fired too, because if their IT department did the job properly, this would not be possible without some hacking going on. The correct setup (at least in a Windows network, which is what most people are using, even if it’s *nix with Samba on the backend) should be that there is a policy in place which points “My Documents” at a network drive. This network drive gets backed up every night, blah blah blah. The only things that go onto the local drive should be the operating system, software that needs to be stored locally (or is too large to carry over the network in a timely fashion), and local settings. That’s it. And if someone’s PC blows up, you can just flash a new image onto a drive and they’re up and running again in a few minutes.

      Now, once we get away from standard IT best practices, what is “thin client computing”? We’re already storing our data on a dumb central server, but doing all of the processing locally. Is AJAX “thin computing”? Not really, since the client needs to be a lot thicker than the server. Is installing Office on a central computer but running it locally “thin computing”? Not at all. Yet people (Mr. Berlind, for starters) seem to think that storing a Java JAR file or two on a central server, downloading it “on demand”, and running it locally is thin computing.

      Thin computing does not occur until the vast majority of the processing occurs on the central server. That is it. Even browsing the web is not “thin computing”. Note that a dinky little Linux or BSD server can dole out hundreds or thousands of requests per minute. I challenge you to have your PC render hundreds or thousands of web pages per minute. Indeed, even a Windows 2003 server can process a hundred complex ASP.Net requests in the amount of time it takes one of its clients to render one of those requests on the screen. I don’t call that “thin computing”.

      Citrix is “thin computing”. Windows Terminal Services/Remote Desktop is “thin computing”. Wyse green screens are “thin computing”. X terminals (if the “client” {remember, X Windows has “client” and “server” backwards} is not local) are “thin computing”. Note what each one of these systems has in common. They are focused on having the display rendered remotely, then transferred bit by bit to the client, which merely replicates what is rendered by the server. All the client does is transfer the user’s input directly to the server, which then sends the results of that input back to the client, which renders the feedback as a bitmap or text. None of these systems requires any branching logic or computing or number crunching or whatever by the client. That is thin computing.
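
      If you want that in code form, a truly thin client boils down to something like this conceptual sketch (the endpoint and message format are invented for illustration):

      // Conceptual sketch of a truly thin client: forward raw input to the server,
      // paint whatever the server renders. No application logic runs locally.
      // The endpoint and message format are invented for illustration.
      const canvas = document.getElementById("screen") as HTMLCanvasElement;
      const ctx = canvas.getContext("2d")!;
      const session = new WebSocket("wss://terminal-server.example/session");
      session.binaryType = "arraybuffer";

      // Every keystroke goes straight to the server, untouched...
      document.addEventListener("keydown", e => {
        session.send(JSON.stringify({ type: "key", key: e.key }));
      });

      // ...and the only thing the client does with the reply is blit it to the canvas.
      session.onmessage = async (msg) => {
        const frame = await createImageBitmap(new Blob([msg.data]));
        ctx.drawImage(frame, 0, 0);
      };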

      Stop confusing a client/server network with “thin computing”. Just stop. Too many articles and comments that I have seen do this. They talk about the wonders of thin computing, like being able to have all of the data in a central repository to be backed up, or having all of my user settings in a central repository that follows me wherever I go, or whatever. I really don’t see anyone describing any “thin computing” benefit that doesn’t already exist in the current client/server model. The only thing that seems to be changing is that people are leaving protocols like NFS and SMB for protocols like HTTP. It’s a network protocol, folks. Wake up. It’s pretty irrelevant how the data gets transferred over the wire, or what metadata is attached, or whatever. It does not matter. All that matters is what portion of the processing occurs on the client versus the server, and in all of these situations people are listing, the client is still doing the heavy lifting.

      • #3119877

        Thin Computing Is Rarely

        by Jay Garmon ·

        In reply to Thin Computing Is Rarely “Thin”

        Preach on, brother!

      • #3131441

        Thin Computing Is Rarely

        by jdgeek ·

        In reply to Thin Computing Is Rarely “Thin”

        OK, can we use the term pseudo-thin without raising your ire? There is certainly a noteworthy difference between a computing environment that relies on standards-compliant web clients as the one managed application versus having a separate client for each task. Maybe thin versus thick is not the best description of that difference. Pseudo-thin may not be truly thin, but it is at least thinner on the administrative side, and arguably thinner on the client side. It seems the real difference is in the network intelligence. In pseudo-thin, not only the data but also the logic comes from the server. In this way pseudo-thin is like truly thin, even if the interpretation and rendering take more processing horsepower on the client.

        Instead of belittling others, why don’t you suggest a new terminology?  You might go down in history as the guy who first identified the dumb thick client.  Dumb….thick… wait, I’ve got it!  It’s the Anna Nicole client.

      • #3131325

        Thin Computing Is Rarely

        by justin james ·

        In reply to Thin Computing Is Rarely “Thin”

        Sure, we can say “pseudo-thin”. 🙂 You make a very good point: with a web application, all of the logic is stored and maintained on the central server, even if the execution of the logic occurs on the client’s side. That is a large benefit of a web application (as is the fact that the application can be accessed from a wide variety of clients, although cross-platform compatibility is still a huge issue). You can also do the same thing, however, through a centrally managed “push” system like Microsoft Systems Management Server (SMS). To me, there really isn’t too much difference, on a logical basis, between an application whose installation gets pushed by a central server to a client and run from the client’s local system when needed, and an application that gets pulled by the client when wanted. They both have their advantages and disadvantages. An application push can send an incredibly rich, native application, and only needs to do it once. Once the initial push is over, the client doesn’t need to interact with that server anymore. A pull, on the other hand, forces the application to be relatively lightweight. Imagine trying to pull MS Word or OpenOffice down the pipe every time you want to use it… that would be pretty miserable. Sure, you could cache it, but at that point, the hardware resources needed for a push and a pull are the same, and the two become nearly indistinguishable in terms of functionality, except that with a cached pull model, the break in workflow occurs the first time you try to use the application, whereas a push model interrupts you (or, hopefully, works in the background) at random times like bootup.

        Sure, I focused a bit on terminology in the blog, but the underlying assumptions are what I’m attacking. People are tossing around phrases like “thin computing”, which have a meaning of their own, when what they really mean is “client/server network with advanced functionality” or (to use your term) “pseudo-thin client”. Improper usage of terminology leads to miscommunication and misunderstanding. If I used the word “red” where most people use the word “pink”, I wouldn’t do a very good job working as a salesperson for clothing, particularly over the phone. If people misuse the term “thin computing”, they aren’t doing a very good job at communicating, especially if they are paid journalists.

        J.Ja

      • #3130551

        Thin Computing Is Rarely

        by jdgeek ·

        In reply to Thin Computing Is Rarely “Thin”

        I agree with you about terminology; sometimes I just can’t help playing devil’s advocate.

        Another significant advantage to the web services approach is a kind of sandboxing. While you are correct that push vs. pull is probably not a major difference, there is a major difference in having only one application (i.e., the browser) run code natively. I believe this significantly decreases your security exposure.

        Also, I assume it is easier to deploy non-standard, custom, or lightly used apps using a web service. Although I don’t have any experience with SMS, by deploying apps through a web server you move to a two-phase develop-then-deploy model, as opposed to the SMS develop, package, deploy model. I’m not sure how difficult it is to package an application in SMS, but I’m sure that creating a cross-platform package that does not break existing applications is not necessarily trivial.

        Anyhow, good work on the thin client blog and thanks for an interesting discussion.

      • #3131680

        Thin Computing Is Rarely

        by stress junkie ·

        In reply to Thin Computing Is Rarely “Thin”

        I agree with J.Ja. There have been many instances when a given term has enjoyed a clear definition for many years, then one day people start to misuse the term. The next thing you know, the original definition is lost. This is the inspiration for the expression “Newbies ruin everything,” which I have been saying for a long time.

      • #3044038

        Thin Computing Is Rarely

        by jcagle ·

        In reply to Thin Computing Is Rarely “Thin”

        I have to disagree with one point made in this article.

        This article said that no one should be saving their files to local drives; everything should go to the server. With a thin client, that may be what you have to do.

        However, I’ve found that working from the network is generally a bad idea. Networks go down sometimes, and they can be slowed down by too many people working from the network.

        At my school, they recommend we work from the hard drive. I’m going to school for graphic design and web development. When you start working with Photoshop and Illustrator, you do not want to work over the network. We work from the hard drive and back up to the network server, USB Flash drive, CD-R, etc.

        Maybe it won’t seem like much of a problem on a smaller network if you’re just handling Word documents or something. But I think in most cases, working from the hard drive and backing up to the server, a flash drive, CD-R, etc. is the best idea.

        Again, this isn’t a thin computing situation, but standard IT practice. Trust me, I don’t want to be working in Photoshop with a project due, working over the network, and then have the network go down or slow to a crawl because huge files are being worked on over the network.

    • #3122828

      Technology I’m Thankful For

      by justin james ·

      In reply to Critical Thinking

      Well, here it is, Thanksgiving! And I’m trying to find some technologies that I am thankful to have in my life, since lately technology has been doing its best to make my life unhappy. So here’s what I’ve come up with:

      CD Players: Storage capacity and size aside, there is nothing that an MP3 player can do that a CD player can’t. And CDs are easy. Most importantly, the players are cheap now. CDs have been part of my life for 15 years now, and I’m always happy to have them in my life. And unlike just about everything else in tech now, I’ve never had one crash on me.

      IBM ThinkPad 390E: Never heard of this model? Not surprising. It’s a PII 300 MHz system with 160 MB RAM (I’m sure there’s more installed, but I don’t know how much is allocated to video). Sure, it’s slow. The CD drive doesn’t recognize that a disc is in there. I got a floppy jammed into the drive a few nights ago (trying to find a good floppy to start a BSD install with). With XP on it, it is so slow that it can’t play the Windows wave-file sounds properly. It’s fairly heavy and has few features. It is so outdated that it does not have a built-in Ethernet port. But it has one thing that no other piece of equipment in my life has: durability. The thing is a tank. If I got into a fight, I would rather be armed with the 390E than a knife. And I’m sure it would still work afterwards. It doesn’t crash, either. For the two or three times a year I go on the road, I know that I can count on it to give me just enough connectivity to survive. Most importantly, when I have a major hardware problem, it tides me over until I can resolve the problems with my other machines.

      Microsoft’s .Net Framework: I don’t care that it is nearly as sloppy and inefficient as Java, or that it is not cross-platform. .Net has saved me hundreds of hours of coding time, and Microsoft’s fantastic documentation has saved me at least a few dozen hours in the last year. Compared to working in Java, .Net is a dream. Visual Studio is a great IDE, and its tight coupling with IIS gives me debugging powers on web dev projects that I never found with Java, Perl, or PHP. That saves me even more time and leads to better code.

      Cell Phones: I live and die by the cell phone. I haven’t had a landline in nearly four years, and don’t miss it at all. My cell phone is cheaper than a landline, too. I like not having to take personal calls on a work phone, being able to leave the house without interrupting a call, and the ability to send text messages and email on the road, miserable as the interface may be, in those clutch situations. There are a lot of great things (and important things) that I would have missed without the ability to be reached wherever I may be, since I am so infrequently at home.

      USB: The number of peripherals has skyrocketed since the invention of USB. High speed, easy device connectivity, the ability to add multiple devices without being limited by the number of ports on the machine… USB has introduced us to a whole new world of computing options. Digital cameras, scanners, web cams: all of these cool things would not be nearly as widespread as they are today without USB.

      Digital Cameras: OK, so my ex is holding on to mine lately because she’s been wanting to take a lot of pictures. But when she isn’t, I carry mine around everywhere. I used to do the same with a film camera, but then it would sit until I got around to getting new film, and I hated the development costs and the wait and everything else. I love digital cameras, and have since I got my first one.

      Inexpensive Broadband: Ever since 2000, I’ve been a cable modem user. It is as cheap as a second phone line plus an ISP account, a billion times faster, and super-reliable. Broadband has made my life infinitely less frustrating, and for that alone, it gets my thanks.

      So that’s my list for now. What’s on your list?

      J.Ja

    • #3127264

      Intel drives Apple sales up in 2006?

      by justin james ·

      In reply to Critical Thinking

      “Apple Sales Mushroom, Thanks To Intel CPUs”

      That is one headline that we will definitely not be seeing in 2006. Standard business practice for making a product sell is to accomplish at least one of the following: a better product for the same price, an equal product at a better price, or superior customer perception of the product, regardless of price. In other words, it either needs to be better, cheaper, or marketed as such.

      I am not going to dispute Apple’s back office reasons for switching to the Intel CPUs. They had a rocky relationship with IBM and Motorola, and the PPC platform was not going where Apple needed it to. Intel offered them a way out, and Apple had conveniently been maintaining an x86 version of OSX the whole time. If Apple had been maintaining a SPARC version of OSX instead of x86, we would be hearing about a Sun/Apple partnership right now. The decision has been made, the code has been written and is being tested. It is a done deal.

      But those who think that this deal will significantly boost sales of Macintosh computers are dead wrong. The Intel architecture simply does not add value, reduce prices, or make the product more marketable. Here is why.

      Increased Value

      This is a simple question to ask: “Does the Intel architecture make a Macintosh any better?” Currently, no, it does not. Yes, the Intel chips are running at a higher clock speed than many of the G4 and G5 CPUs that Apple will be replacing, particularly on the low end. The mini and iBook are running fairly old chip designs. But remember our business rules here: Apple needs to offer a better product at the same price.

      Better Product?

      The switch to Intel CPUs will not make the Macs better at first. Yes, the Intel architecture offers a better roadmap for the future, which means that the Macs a year or two from now will be better than they would have been had Apple stayed with the current PPC chips. But if IBM had devoted as much time to Apple as they have to Sony and Microsoft (for the PS3 and Xbox 360, respectively), then the PPC Mac roadmap would be a five-lane highway compared to Intel’s dirt roads. The x86 architecture also is not offering any new features above and beyond what PPC should have been offering. It isn’t like they are building ATI Radeon 800s or integrating 200 GB hard drives onto the CPU or something.

      Yes, the Intel chips of today may have a higher clock speed and even perform more FLOPS than the PPCs of today. But without x86-native binaries, and with nearly every piece of software running through the Rosetta translation layer, I am positive that for the first six months minimum, if not as long as two years, software running on the Intel Macs will not be noticeably faster than software on equivalent PPC Macs. I am pretty sure that it will actually run slower for many, if not most, applications for some time.

      I have also written extensively about how the Intel switch will not bring any other tangible “value adds” to Mac users.

      On the “better product” angle, the switch to Intel CPUs simply allows Apple to continue at the rate of improvement that they should have had with PPC, which simply means that (at best) they will be producing the same numbers as Dell, HP, Gateway, etc. That is hardly an “improvement”. Any speed gains they get will have to come through superior OS design, and rely upon the availability of x86-native binaries. Considering my experiences with finding x64-native binaries and device drivers for my new Windows XP Professional x64 PC, I think that Mac users are going to be quite shocked at how little will be available for them, at the time of launch for sure, and even much further down the road.

      Same Price?

      Paul Murphy, over at ZDNet, recently wrote a very interesting article regarding the Intel CPU switch. To summarize: a low-end Intel CPU costs $240; the G4 that is used in the iBook costs $72. That is a price increase of about $170. Apple’s number one barrier to entry is the cost of their products. Mr. Murphy’s numbers seem sound to me. To be honest, I stopped watching CPU prices around the time I bought a 486 DX4 120 MHz system, so I cannot verify them. But they seem about right.

      No Value Added

      If Apple had gone with AMD (particularly the AMD x64 CPUs) or even SPARC, I would not need to write this article. They would be delivering superior performance at the same price, if not less, than they do now. But they went with Intel. They are paying more money for less performance. Which means that they must either offer the same value at a better price, or find a way to increase the product’s marketability.

      Same Product, But Cheaper

      Same Product?

      As discussed earlier, the Intel Macs will be, at best, as good as the current Macs, and the Intel roadmap is no better than what the PPC roadmap should have been. I am going to give both Intel and the Rosetta translation layer the benefit of the doubt, and allow that the Intel Macs will be as good as the current offerings. It isn’t inconceivable, that is for sure. I have no way of knowing without seeing some cold, hard numbers when the final version is released. None of the reviews I have read of the developer kits mentioned performance in any memorable way, so I am guessing that performance was not on either extreme of the scale. So we can assume that (once again), at best, the Intel Macs will be the same product as the current Macs.

      Cheaper Price?

      This section definitely feels like deja vu. The Intel Macs will not (and cannot!) be any cheaper, at least on the low end, than their PPC counterparts. I have not seen prices for the high end G5 chips, but it is possible that the big PowerMacs will be $100 – $200 cheaper than they are now. Considering their current prices, that is a welcome price drop, but still hardly enough to suddenly make them a bargain.

      No Pricing Advantage

      Once again, I do not see the Intel CPU architecture giving Apple an advantage that they do not currently have. Since Apple isn’t going to be making a better product, or a cheaper one, that leaves them with only one way to significantly grow Mac sales…

      Marketing

      Without a better product or a better price, Apple must rely upon the arcane art of product marketing, based around the switch to Intel CPUs, to make sales jump at all, and possibly even to reassure the faithful that this is not a Bad Thing™.

      “Intel Inside” is one of the most pervasive ad campaigns out there. The vast majority of the computing users worldwide use a computer with an “Intel Inside” sticker on it. But “Intel Inside” will not bring a single advantage to the Apple marketing strategy. There are a few reasons for this, based in the “Intel Inside” campaign, as well as Apple’s current marketing efforts.

      Intel Inside

      What does “Intel Inside” promise to the end user, to make it something worth paying extra money for? Let’s look at the History of “Intel Inside” website to find out. Apparently, “Intel Inside” was designed to present the following ideas to customers:

      • It matters who makes the CPU within your system. Intel wanted the consumer to equate “Intel” with “safety”, “leading technology”, and “reliability”.
      • There is an easy way (the stickers, the little noise in ads, the logos in ads, etc.) to identify which computers use Intel CPUs and which ones don’t.
      • Early “Intel Inside” campaigns stressed “speed, power, and affordability”.
      • Current “Intel Inside” efforts focus upon “technology leadership, quality, and reliability”.

      When one looks at Intel’s overall branding efforts, this is definitely what we see. They stress that Intel CPUs are the gold standard for x86 compatibility, that PCs work best with Intel CPUs (the Centrino campaign takes this message to nearly scandalous lengths, presenting users, through dodgy wording, with the idea that their computer will not work with non-Centrino equipment), and that Intel is constantly innovating.

      Apple Switch

      To grow market share, Apple needs to cut into the PC market. Yes, having 3% – 5% of a market the size of the PC market is still a very nice slice of revenue, but they want to, and need to grow it. They have known this for a long time now. Apple counts on existing Mac owners to keep buying new Macs. From what I can tell, with the exception of those forced to go to PCs for financial or business reasons, that holds true. I have never met a Windows user who actually seems delighted to be using Windows. I have never met a Mac user (one who chooses to use a Mac, that is) who fails to provide free advertising for Macs. They definitely have something good going on there. The fact that Apple’s market share hasn’t been shrinking while the computer pie gets bigger says that Apple is signing up new users as fast as the market expands. That is a good thing for them. Due to the nature of the situation, Apple has been throwing their energies (rightfully, in my opinion) at existing PC users. Let’s take a look at the “Apple Switch” marketing campaign.

      • “It just works”
      • “As easy as iPod”
      • “Picture-perfect photos”
      • “It’s a musical instrument”
      • “Home movies in HD”
      • “Online streamlined”
      • “Join the party” (about Apple’s support and community)
      • “It loves road trips” (portability, easy connectivity)
      • “It does Windows” (ease of opening/editing files produced on Windows PCs)
      • “It’s beautiful”

      I do not see any convergence, synergy, or shared edges (how’s that for some buzzwords!) with the “Intel Inside” campaign. In other words, “Intel Inside” does not offer any additional zing to the Apple Switch campaign. Indeed, most of the “Intel Inside” campaign actually works in reverse for Apple. OSX on x86 is untested, there are (and will be for some time) few native binaries, it will not be faster (and quite possibly will be slower), and so forth. Chances are, OSX on x86 will be less reliable than OSX on PPC. OSX succeeded largely because “Classic Mode” worked so well. If people running Mac OS 9 programs had had huge problems, no one would have upgraded, or been willing to buy a new Mac with OSX installed. But Classic Mode worked, and worked well, and worked fast, and OSX was adopted.

      “Intel Inside” will certainly be a tough sell to the current Mac market, and it will not help Apple at all in competing against Windows. It reduces their hardware, in the minds of the consumer, to a commodity device, equal to a PC. Will a potential buyer be happy about paying significantly more money for nearly identical hardware, simply because the OS is different? Probably not. In terms of marketing, I think “Intel Inside” is dismal. Sony is not a super-huge PC seller for this very reason. Sure, their computers have nice designs and are packed with features. But if you are willing to settle for an uglier case, keyboard, mouse, etc., then you can save hundreds of dollars by purchasing a similarly equipped HP, Dell, or eMachines PC. With Intel hardware, Apple positions itself as another Sony, and still has to overcome the problem where consumers perceive a Mac as being incompatible with what they want to do.

      Conclusion

      “Intel Inside” will not drive Mac sales at all. If Apple sales rise significantly in the near future, it will be because they are able to substantially improve the value of a Mac, either by offering faster/more hardware for the same price, or by dropping the price quite a bit. The mini shows that people are willing to buy a Mac if the price is right. Apple needs to get that price even better. That is how they will grow market share. If the Intel CPUs could do that, I would believe that the switch to Intel will drive sales. Unfortunately, Apple chose Intel, instead of Sun or AMD, to work with, and we are going to be stuck with overpriced, underpowered Macs for the foreseeable future.

      • #3121415

        Intel drives Apple sales up in 2006?

        by guy_sewell ·

        In reply to Intel drives Apple sales up in 2006?

        I believe your reasoning is sound, but you are misinterpreting some of the conditions.

        For the PC crowd:

        1. To grow market share Apple needs to attract switchers. The Mac faithful/fanatics are sold. But to a PC person a Macintel has obvious increased value: I can run Windows and my favorite PC software, but also choice Mac stuff. (more value, equivalent price)

        2. The increased value is the operating system. I use PCs and Macs. For a non-IT professional there is NO question productivity increases and headaches decrease on a Mac. (more value, same price)

        For the Mac crowd:

        3. A dual core laptop will show significant increases in processing power as compared to current G4 models. (more value, same price)

        4. Laptop features have been slow (slowed?) to evolve lately on the Mac. We should see significant improvements in weight and battery life, as well as in processing power and AV performance, due to the incorporation of Intel’s new platforms/technologies. This is not an advantage unique to Apple from the PC-world side, but it will be a big boost to the Apple faithful. (more value, same price)

        Every consumer (Mac or PC) will see increased value, and as a premium brand this is how Apple increases sales: not by being cheaper, but by being better.

        The real challenge is whether developers will still make Mac-specific software if you can run Windows stuff on a Mac (the OS/2 effect). But 2006 is not 1990; much of the consumer software, and some (more every day) of the pro stuff, comes from Apple itself. You can bet it will either be native, or will be soon after Macintels arrive. And if big developers are slow to offer support, Apple has shown they will fill the gap and make money doing it (watch out, MS and Adobe).

      • #3121402

        Intel drives Apple sales up in 2006?

        by wizkidsah ·

        In reply to Intel drives Apple sales up in 2006?

        I think many people shy away from buying a Mac due to concerns about incompatibilities with Windows/their company, etc. For $85 or less, you will be able to install Windows XP on the MacTel systems and run your XP applications. I do think many people will try a Mac if they are comfortable that they can still do “Windows” things if they need to. I build my XP boxes, but I am looking forward to a high-end MacTel box I can just install XP on. Why bother with building them anymore? As a PC gamer, I’m excited that it appears the new MacTel systems won’t need slightly modified cards from ATI and nVidia anymore. Forget about the price differentials of the CPUs; if I can buy some of those (relatively) cheap cards built for PCs and stick them in my MacTel box, or not wait 4 months for the Mac version to come out, that’s huge. It really speaks to being able to use chipsets and other “off-the-shelf” components. It’s a bear for Apple to develop proprietary chipsets for PowerPC processors, and that component is not cheap. Now they can use Intel’s chipsets. Some of the advantages to this switch go beyond the processor.

        Away from that, your arguments are sound. I don’t see another processor exciting consumers, although I haven’t seen many people who think it would, beyond what I just described. People haven’t been running up to me going “Yay, Apple is switching to Intel!” Most people who buy Macs, or PCs for that matter, couldn’t care less. Gamers and tech geeks are another story, but those are niche markets. But all of the fence-sitters I know who have been looking at Macs from afar truly think their last reservation has been removed when I tell them that XP will run on a Mac if you install it yourself. If they’re rational, many of them should buy Macs next year. We’ll see.

      • #3121397

        Intel drives Apple sales up in 2006?

        by vuong.pham ·

        In reply to Intel drives Apple sales up in 2006?

        Total garbage.
        sorry but the flaws in your points contained in this piece are horrible.

        I disagree with our points about “Classic Mode” OSX wasn’t adpoted
        because of the mere exsistance of a OS virtualization contained in its
        own memory space. For me personally the real improvements came and thus
        the “reasons” for adopting OSX was the modern features the OS 9 could
        not provide. Applications making the transition into native osx app
        world that made the difference. Case in point the transition from 040
        to PowerPC. Where were you? Fat binaries did work very well and the
        compensation of the performance from the PowerPC CPU made  a large
        difference.  Comparing Classic Mode with OSX adoption is
        incorrect. You should be examining the Rosetta  software.

        “Chances are OSX on x86 will be less reliable than OSX on PPC” Where
        did you drag up this? Where are your quanitative analyses? NUMBERS ?
        STATs? or just pure SWAGing.

        Driving the sales of any computer is a ratio of price/performance and ROI. OSX is a huge factor in that equation. Comparing Wintel (Windows + Intel) and Mac-Intel, the initial difference is how well the OS takes command of the CPU. How well designed are the subsystems (I/O, video, etc.)? Does the OS provide enough control of the hardware to squeeze all the performance out of the system? Case in point:

        The initial releases of BeOS, Red Hat, etc. showed that the OS was fantastic. But the end user doesn't sit around all day drawing windows or running benchmarks. The ROI is how much productivity can be accomplished.

        The Intel transition will be just that, a transition, and if history serves as a lesson to be learned, Apple will be paying close attention.

        As for me, am I biased? Not really; I am solutions driven. Application developers will determine whether a viable solution will exist with the new marriage of hardware and operating system. Price point is only one factor.

        "Overpriced" is a myth when you compare real component-level purchases of computer systems. Sure, most Intel boxes sold are run of the mill and not so special, hence "commodity," but compare, for example, the iBook against other $999 systems: the value is there, with the performance to match. The same goes for the iMac G5 systems. As for dual- and quad-core systems, have you recently compared the Wintel versions of the dual-core systems, WITH the associated subsystems? Compare straight across the board and a cost analysis will show not much price differential.

        A $3,000 system is a $3,000 system.
        -Vuong Pham

      • #3121395

        Intel drives Apple sales up in 2006?

        by guy_sewell ·

        In reply to Intel drives Apple sales up in 2006?

        I believe your reasoning is sound, but you are misinterpreting some of the conditions.

        For the PC crowd:

        1. To grow market share, Apple needs to attract switchers. The Mac faithful/fanatics are sold. But to a PC person, a Macintel has obvious increased value: I can run Windows and my favorite PC software, but also choice Mac stuff. (More value, equivalent price.)

        2. The increased value is the operating system. I use PCs and Macs. For a non-IT professional there is NO question that productivity increases and headaches decrease on a Mac. (More value, same price.)

        For the Mac crowd:

        3. A dual-core laptop will show significant increases in processing power compared to current G4 models. (More value, same price.)

        4. Laptop features have been slow (slowed?) to evolve lately on the Mac. We should see significant improvements in weight, battery life, processing power, and AV performance due to the incorporation of Intel's new platforms and technologies. This is not an Apple-only advantage over the PC world, but it will be a big boost to the Apple faithful. (More value, same price.)

        Every consumer (Mac or PC) will see increased value, and as a premium brand this is how Apple increases sales: not by being cheaper, but by being better.

        The real challenge is whether developers will still make Mac-specific software if you can run Windows software on a Mac (the OS/2 effect). But 2006 is not 1990; much of the consumer software, and some of the more everyday pro software, comes from Apple itself. You can bet it will either be native, or will be soon after the Macintels arrive. And if the big developers are slow to offer support, Apple has shown it will fill the gap and make money doing it (watch out, MS and Adobe).

      • #3121386

        Intel drives Apple sales up in 2006?

        by oharag1 ·

        In reply to Intel drives Apple sales up in 2006?

        I think you are dead wrong.

        A recent benchmark over at AnandTech shows the upcoming Yonah chip actually beats AMD's 64 X2 3800+ chip, and even the 64 X2 4200+ in some instances. Understand that the X2s are desktop chips, and Yonah is a laptop chip. Intel is driving forward with 65nm chips more quickly than AMD. AMD has to rely on contract manufacturers (i.e., IBM) to make advances in manufacturing processes. Also, the chipset coming for the Yonah will offer higher bus speeds and video speeds than what is currently available on the PowerBooks with PowerPC. This is just the start. I believe the chips coming out of the Intel camp in the next two to three years will be amazing.

        Add the fact that new Macintoshes will actually boot Windows, either on its own or in a dual-boot setup, and it's just amazing. I can still run my PC apps (as can all the millions of PC users), but then use MacOS X as well. I also believe that with a common chip design (Intel), more people may start to port their PC-only apps to the Mac. I think things look great for Mac users in the future!

        http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=2627&p=1

      • #3121292

        Intel drives Apple sales up in 2006?

        by yespapers ·

        In reply to Intel drives Apple sales up in 2006?

        I think one point that's missing here is perception. As with so much in the tech business, perception counts as much as or more than reality. The perception that Apple products now run Intel chips (and, as anyone will tell you, only "real" computers use them, right?) means that Macs are now officially real computers and can be considered a viable alternative to a Windows machine.

        Another point is that Apple HAD to get off of IBM chips. IBM was just not going to put the money into desktop chips when it could make a bundle in gaming consoles. This meant that as demands for computing power and lower power consumption grew, Apple would be more and more at a disadvantage.

        And using the Sun SPARC, are you kidding? Talk about a costly chip with potential supply problems in the quantities Apple would need. I do agree that it is one stellar processor, but it is hardly right for Apple's markets, not to mention the rewrite of OS X needed to take advantage of it.

        The real downside in all this is losing the current and future G5 chips. In the high-performance computing market, the G5, along with the AMD Opteron, was a real winner over anything Intel currently has. Intel is definitely catching up and will most likely surpass AMD, but who knows? And who's to say that OS X won't ever run on an AMD machine? So far the Intel version of OS X has been hacked to run on all kinds of platforms apart from the Apple Mactel box.

        I think the next year or two will be really interesting for Apple. They deserve a lot of credit for what will be their third major product transition: 680XX to PPC, Mac OS Classic to OS X, and now PPC to Intel.

        You go Apple!

      • #3121213

        Intel drives Apple sales up in 2006?

        by davemori ·

        In reply to Intel drives Apple sales up in 2006?

        Disagree.

        I guess that you have never had to produce and ship an actual product in the Silicon Valley.

        A lot of the production constraints as to the number of Macs that Apple could make (as well as Mac Clones, back in 1997), could be directly pinned back to the ultimate limitation in the number of PowerPC CPUs that IBM and Motorola could produce in a given time period.

        Market share, in turn, has been limited by the number of Macs that can be produced — which is a direct function of the number of CPUs produced.  In 1998, Intel-AMD-Cyrix were producing about 50 million CPUs.  Motorola and IBM were producing about 5 million PowerPCs (excluding embedded PowerPC processors).  The number has remained at about the same level, while Intel-AMD have boosted production capabilities to exceed 180 million CPUs.

        An Intel solution breaks the production limitations and gives Apple an open door to using AMD processors if it sees value in doing so. It can also continue to use the PowerPC; nothing says that it absolutely has to abandon the PowerPC processor. As for SPARC, all that would do is put Apple at the mercy of Sun Microsystems' SPARC production run limitations. Sun's annual SPARC CPU production runs are puny in number compared to Intel, AMD, or PowerPC, and even AMD cannot produce as many CPUs as Intel.

        There is very little reason to believe that any application or OS, even the Mac OS, leveraging OpenGL, open standards, open source, Linux, and a universal binary, will not work as well on AMD as it would on Intel.

        Even back in 1996, something like 95% of the connectors, chips, and surface-mount devices on a PowerMac logic board or in the chassis (power supplies, etc.) were industry standard and shared with contemporary Intel PCs. The other 5% consisted of ROMs and a limited number of Apple-designed ASICs. Apple clearly has a long-standing understanding of what it can get from economies of scale and production boosts by using industry-standard components.

        There is no payoff in making lots of components on your Bill of Materials yourself, when someone else can do them more cheaply and deliver them in quantities of tens of millions per month.  Supply of CPUs and components has everything to do with how many units you can produce.

        Apple’s purpose is not industry market dominance.  Its purpose is to be profitable to its own shareholders by providing products to its customers that are perceived to create customer value.

        If the CPU switch does not create massive, tangible value-adds, it still succeeds if the Intel-based Mac works just as well as the PowerPC-based Mac and if customer demand is met faster because products are available sooner and in greater quantities than before.

        Apple continues to defy industry pundits each year for decades by selling off all of its production run of Macs.  If it is suddenly able to increase its supply by even 25% — much less doubling or tripling it — Apple wins.

        There is no reason to believe that the OS and apps won't be stable under Intel. Linux works exceptionally well on x86. If anything is unstable, it is Windows, and its instabilities are not the fault of the hardware.

        There were similar arguments out of the industry in the 1990s when the PowerPC was announced. Performance issues in the initial PowerPCs were due more to the lack of a fully native OS and file system than to the apps themselves, and performance was still better than an 030 or 040 could deliver in most instances. The fat binary approach used by Apple was brilliant. The universal binaries used now are better. Apple has clearly shown that it still remembers those arguments, and has found some new ways to make porting easier and even more compelling. It has also shown that it clearly remembers the nightmare lessons of MacOS licensing, when licensees were deliberately deviating from the PowerPC Mac Common Hardware Reference Platform and generating instabilities in the OS that got dumped on Apple's tech support, to the tune of millions of dollars per day in support calls for machines not made by Apple.

        While Apple can probably always improve on a price, that is not the only way to grow market share.  While increased market share is of no doubt important to Apple, it is not the overriding quest of Apple.

        As for the marketing programs at Apple – remember that marketing campaigns are designed by Madison Avenue companies. If they miss the mark for some of us, the advertising company that came up with the campaign deserves as much shame as Apple does for buying into it. The blame does not reside with Apple alone.

        The industry as a whole has been at a point for several years where CPU speed increases have to be kludged with multiple-core processors and the like, due to limitations on the speed of memory, etc. Moreover, additional hardware speed is becoming irrelevant when it does not appreciably increase the speed of your apps, MS Office, your browser, etc.

        Speed improvement over last year's models is already less of an issue on Intel. There is no reason to think that it is that much of an issue on a Mac.

        Apple has done a decent job of demonstrating the value of dual core on processor bound apps like Final Cut Pro with symmetrical capture-compression, etc.  I have seen similar laudable efforts on AMD for VMWare and other products.

      • #3196962

        Intel drives Apple sales up in 2006?

        by pheck ·

        In reply to Intel drives Apple sales up in 2006?

        Perceptions, perceptions, perceptions.

        That's what will drive Apple sales. By aligning their CPU technology with the recognised mainstream provider (Intel), they are just following through on the long-term marketing strategy that started with the Switch campaign (probably even before). The average Joe in the street is going to go on image. If the box has an Intel Inside sticker and has been advertised that way, it's another known quantity. The styling will appeal, the price point won't be that different, and it will run all the regular apps – It Just Works. The GUI won't be too much of a challenge after that, if at all. Even Joe Average knows that there are several flavours of GUI out there now.

        I believe that Apple sales will go up, maybe not in leaps and bounds, but in a steady ramping up. We’ve seen the iPod halo and I suspect that there will be an Intel halo.

        The tech transition will be just scenery on the side of the road.

        Paul H

    • #3124862

      Security Minded

      by justin james ·

      In reply to Critical Thinking

      As I was setting up FTP access for our clients, I got upset, once again, at how we handle our policy regarding customer access. One of our customers does this right: each person in our company who needs to access their network has an individual username and password. My boss, and most of our customers, just want a single username/password for anyone in their organization to use. Already, this gives me a stomach ache. The idea that someone who does, or used to, work for our customer would have access to our network, with no way for me to block them by IP address or to tie a login attempt to a particular individual, is a frightening thought.

      To make it worse, a number of the higher-ups got upset at the passwords that were being assigned. "They won't be able to remember these passwords!" was the main complaint. Well, Windows 2003 won't let me assign easier passwords unless I change the default policy, and I'm not going to do that. Our customers have been accessing our systems with username/password combinations like "companyname/companyname" for far too long, as far as I am concerned. We process sensitive data, such as sales figures. As it is, I am considered an "inside trader" as far as blackout periods and whatnot are concerned, because I have access to all sorts of raw data, and that data in turn becomes finished reports. It is extremely important that we safeguard this data to the best of our abilities.

      My stance is, if our customers cannot be bothered to learn a complex password to protect this data, they need to get out of the industry. NOW.

      This entire situation got me thinking back to some of the other security faux pas that I have witnessed during my time in IT. Here are a few of my personal favorites:

      • A major US bank outsourced its network management. We peons at the Third Party Vendor had a habit of writing down router usernames/passwords. Even the HQ routers used the same usernames/passwords. Sure, it was all VPNed with IP address filtering, so no one outside of the "green zone" could access their routers. At least not over VPN. Sadly, all of their routers had either ISDN or dialup failover interfaces. The ones with dialup interfaces could also be accessed by dialing in, for troubleshooting purposes. There was no differentiating what you could do based upon which interface you came in on, so in theory, anyone with one of those phone numbers could dial in to one router, telnet to an HQ router, and then access the entire network from there. It would be trivial to shut down the entire bank's network of ATMs and branches for an hour or two with a well-written script or program that can dial a modem. With no accountability, other than the phone number from which the initial call was placed. Oh yeah, did I mention that they have never once changed their passwords, and that the passwords, as well as all troubleshooting information, are freely available within the TPV's intranet to all who want to see it?
      • At one company I worked for, we had a massive Solaris server running HP OpenView. Sadly, HP OpenView is probably the worst engineered piece of software ever to be sold outside of the $10 “Instant Website Maker!” section at Best Buy. It is a testament to Solaris’ abilities that the server kept running, because HPOV was leaking memory like a torpedoed ship leaks water. Here’s where we had a nice little hole in our security: because HPOV was such a steaming pile of garbage, every user had the root password, so they could kill and restart HPOV. You might as well just make everyone a root user, at that point.
      • Growing up and learning COBOL on an ancient system, what a fun time. Too bad we were all umasked so that all of our work was chmod'ed to 777. It's not hard to cheat, or to destroy someone else's project (as happened to me: someone changed one of my PIC statements, and it took me two months to figure out why my final project never quite seemed to work right), when hacking is a matter of "cd ~username".
      • File permissions are a favorite security faux pas of mine. A company I used to work for (let's just call them "a Fortune 500 company whose former CEO now runs one of the top three computer makers" and leave it at that) used network storage space for all sorts of important documents. A lot of these documents, frequently pertaining to things such as the status of sales to customers, layoffs, offshoring, outsourcing, contractor conversions, employee pay rates, and so forth, were typically created with read permissions for the "Everyone" group, because some nitwit sys admin had turned on inheritable permissions (fair enough) but set the top-level permissions too loosely.
      • Microsoft Indexing Service = "Intranet That Can Find The Documents I Am Not Supposed To Find". Just do a search within many corporate intranets for phrases and words such as "layoff", "consolidation", "India", "sexual harassment", and so forth, and you find all sorts of embarrassing things that were never explicitly linked to. What a great thing it is when "out of the box" defaults, mixed with ignorance on the part of a sys admin or a user, can result in anyone within the organization finding what should be restricted to a few top officials. It's even funnier when the mistake occurs not on the corporate intranet, but on the company's public website.
      • Security through (barely) obscurity. This one always gets a chuckle out of me, when I see someone attempt to hide "top secret" information through such crafty ruses as HTML comments to hide the text, turning off the right mouse button via JavaScript, "burying" important information behind a piece of Flash, and other easily discovered methods. It's especially funny when you find it unintentionally, like when you view the source code of a page to see how they got a nice piece of design to work, and find a database password in there. The folks who write their code like this are typically pretty shoddy on their backends as well; you can usually hit them with a SQL injection attack because their super nifty search system is simply doing something like "sSQLStatement = "SELECT * FROM MY_DATABASE WHERE ID_CODE LIKE '%" + request.item("search_query") + "%"" or something along those lines (a quick sketch of the parameterized alternative follows this list). These same people often have really shoddy exception handling too, and let their database errors get sent to the client.
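
      As a quick illustration of the fix for that last one – a minimal sketch only, using Python and its built-in sqlite3 module rather than whatever those sites actually ran – the difference between the concatenated query and a parameterized one looks like this:

          import sqlite3

          conn = sqlite3.connect(":memory:")
          conn.execute("CREATE TABLE my_database (id_code TEXT, payload TEXT)")
          conn.execute("INSERT INTO my_database VALUES ('ABC123', 'not so secret')")

          search_query = "' OR '1'='1' --"   # the sort of thing an attacker types into the search box

          # Concatenation: the attacker's input becomes part of the SQL statement itself.
          bad_sql = "SELECT * FROM my_database WHERE id_code LIKE '%" + search_query + "%'"
          print(conn.execute(bad_sql).fetchall())    # the whole table comes back

          # Parameterized: the input is handed over as data and is never parsed as SQL.
          good_sql = "SELECT * FROM my_database WHERE id_code LIKE ?"
          print(conn.execute(good_sql, ("%" + search_query + "%",)).fetchall())   # nothing comes back

      The parameterized version treats whatever the user typed as data, nothing more, which is exactly the sort of thing a code review should be checking for.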

      This is just a very short list of some of the most common and most pathetic security flaws I have personally witnessed. All it takes to exploit most of them is someone as curious as me, with less boredom-related motivation and more malicious intent. About half of these I found while simply looking for information I needed to do my job, and finding myself in a "forbidden" directory, or seeing something else on the screen that caught my eye.

      Many of these are common mistakes on the part of end users that were enabled by poor systems administration. The vast majority of your end users have no idea that creating a directory for their supervisor in the common network area is going to expose that directory, by default, to everyone who wants to see it. Sys admins need to make clearly labelled "management only" directories, with the appropriate subdirectories for individual teams. Processes need to be clearly defined for systems operators at the low level for things such as department moves, new hires, employee terminations, and so forth, to ensure that users have only as much access as they need (the little sketch below shows how much difference a sane default makes). Group policies need to be put in place to disable USB ports from being used to hook up keychain drives, to disable file transfers outside the corporate network over instant messaging, and so forth.
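
      To put the "wide open by default" problem in concrete terms – a minimal sketch, assuming a Unix-style box like the COBOL system above, with Python used purely for illustration – the default umask is what decides how exposed a newly created file is:

          import os
          import stat

          def create_and_report(mask, path):
              old_mask = os.umask(mask)          # set the process-wide default permission mask
              try:
                  # Ask for read/write for everyone; the umask strips bits off that request.
                  fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o666)
                  os.close(fd)
              finally:
                  os.umask(old_mask)             # put the old mask back
              mode = stat.S_IMODE(os.stat(path).st_mode)
              print("umask %03o -> file mode %03o" % (mask, mode))
              os.remove(path)

          create_and_report(0o000, "wide_open.txt")    # mode 666: anyone on the box can read and write it
          create_and_report(0o077, "owner_only.txt")   # mode 600: owner only, the sane default on a shared system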

      Some of these are problems with programmers who simply don’t know better, or are too lazy to do better. These problems are trickier to find. There needs to be a rigorous code-review process in place, checking code for things like SQL statements without parameterization and whatnot.

      For both groups of people, sys admins and programmers, there needs to be a combination of education and discipline in place for slip ups. All it takes is for one “wrong person” to get a hold of a document to bring your company’s stock price tumbling, or have the SEC investigating, or any number of other problems. Why risk your company’s well-being?

      Unfortunately, systems administrators are often self-taught or trained in a haphazard manner. Too many people in IT have a certification with no experience to back it up. Programmers get cranked out of CS programs now with a lot of “this is how we do it today” knowledge, but little understanding of “why we do it like this”. All it takes is for one of these people with paper qualifications but no true understanding to have to work with a new technology or a different language to be in the dangerous land of ignorance. And that will sink your business. Beware, and conduct regular inspections to ensure that standards are met.

      J.Ja

    • #3078142

      Making peace with SaaS

      by justin james ·

      In reply to Critical Thinking

      I have been a long-time opponent of SaaS (Software as a Service) in general. Indeed, I think most of the "new ideas" in the IT industry are half-baked at best. But SaaS and thin clients (they go hand-in-hand) are nearly total bunk. Phil Wainewright (ZD Net blogger) and I have been trading ideas back and forth lately regarding this topic. We come at it from different perspectives, and there is a lot more agreement than one would think just by reading what we have written.

      Mr. Wainewright believes that SaaS is quite possibly the way we will do most of our IT in the future. I believe that SaaS is doomed to fail, except in certain small niche markets. He and I agree on two things. The first is that SaaS vendors need to do business in a better and more ethical manner than traditional IT vendors. The second is that SaaS vendors, so far, are not doing so.

      But I also believe that SaaS penetration will be limited for other reasons, not the least of which are technical.

      SaaS will not make large gains where the data goes both ways; it will be primarily a read-only operation. The reason for this is that smart companies with IT budgets of their own will not trust a third party vendor (TPV) with their data. It is not just a matter of whether or not that data becomes unavailable momentarily. What if there is a slip-up on the TPV's end that exposes one company's data to another company? That is potentially a major catastrophe. In addition, there is the issue of importing/exporting the data should you choose to leave that vendor. Even if they allow you to export your data in some common and open format that the new vendor is able to import, how many SaaS vendors out there are going to provide you with a bulk exporting system? How will your system interface with that? A SaaS vendor is set up to allow small, discrete data transactions, not massive rivers of data. For these reasons, companies will tend to use SaaS services primarily where the data transfer is read-only.

      The Web is a miserable way of working on anything other than data that is easily put into a Web form. In other words, if the application requires anything other than textual input from the user, input that is best displayed with standard Web widgets, then it probably won't fly very far. In addition, binary data streams traveling back and forth over a network are really not a fun way of getting your job done. Imagine trying to use a photo-editing (or worse, video-editing) application over a network. In a typical corporate environment, bandwidth is at a premium. Any cost savings generated by a move to a SaaS vendor will quickly be chewed up by bandwidth costs. Is a 3D Studio Max license really cheaper than a dedicated leased line per employee?

      SaaS will do well for applications that only a few users access, or that most employees use rarely but regularly. If more than a small number of users within an organization use a particular application, economies of scale come into play for the customer. If an application is so important that a large portion of a company's employees are using it every day, the application becomes part of that company's daily bread and butter, and as such it makes sense to bring it in-house.

      SaaS will appeal mostly to small and medium-sized businesses without a dedicated IT staff. For those companies where the "IT department" means someone in the office who knows to reset the cable modem when no one can get to the Internet, SaaS makes a lot of sense. For example, a small business that needs ODBC connectivity for small databases would be much better served by a company running Oracle on its end, which gives them an EZ Installer CD that makes the right ODBC connections and a web-based interface for creating new usernames and tables (something like the sketch below), than by an Oracle server sitting in their back room. It also makes sense from a financial standpoint. For a business big enough that it would need someone on hand who is either a DBA or knows enough to fake it, SaaS'ing its database work simply does not make sense.
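
      For what it's worth, here is a minimal sketch of what that hosted-database hookup could look like from the small customer's side. The driver name, host, credentials, and table are all hypothetical, and I am using Python with pyodbc purely to illustrate how little the customer would have to touch:

          import pyodbc

          # Hypothetical connection details the vendor's "EZ Installer" would set up for the customer.
          CONNECTION_STRING = (
              "DRIVER={Oracle ODBC Driver};"                   # whichever ODBC driver the vendor ships
              "DBQ=db.example-saas-vendor.com:1521/SMALLBIZ;"
              "UID=smallbiz_app;"
              "PWD=not-a-real-password;"
          )

          def open_invoices():
              """Run one small, discrete query against the hosted database."""
              conn = pyodbc.connect(CONNECTION_STRING, timeout=10)
              try:
                  cursor = conn.cursor()
                  cursor.execute("SELECT invoice_id, amount FROM invoices WHERE paid = 0")
                  return cursor.fetchall()
              finally:
                  conn.close()

          for invoice_id, amount in open_invoices():
              print(invoice_id, amount)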

      Companies with locations in technological hinterlands will be well served by SaaS. Imagine a call center or a factory in the middle of nowhere. They do not need a highly educated population, and they save a lot of money by putting the facility in a backwater (lower cost of living, lower prevailing wages, employees with nowhere else to work, less unionization, etc.). The company can choose to become its own SaaS provider, with the applications hosted at a home office where it will be able to hire qualified personnel, or it may pay a TPV to provide SaaS, particularly if it is an application that only the remote location uses. In all honesty, it is downright difficult to find competent, qualified, and experienced IT workers in the boondocks.

      SaaS will not be used for mission-critical applications. The name of the game for mission-critical applications is to reduce potential points of failure while providing redundancy wherever feasible. Email is considered mission critical, and therefore very few companies bigger than about twenty or thirty employees outsource their email servers. Once your website becomes a major part of your business, you either bring it in-house or at least put a server in a co-location facility. It is one thing to have the nifty 3D map on your "how to get to our office" webpage be offline. It is another thing for the entire website to be offline. The data link between the SaaS vendor and yourself is a giant point of potentially dangerous failure. The last situation you ever want to be in is for a downed telephone line five miles away to put your entire company out of business for a day. It does not matter what promises the vendor or your carrier or whoever makes to you: stuff happens that you cannot prevent. You can only work to minimize that possibility. A SaaS situation multiplies your possibilities of disaster by a fairly large amount. Would you fly into an airport if you knew that the air traffic controllers were sitting five hundred miles away and communicating with the airport over VoIP? Neither would I.

      I think that SaaS will be best delivered in the form of appliances. When someone signs up for SaaS services, they already accept that it will be a black-boxed operation. The customer has no idea what happens on that server; there could be a few trillion lightbulbs switching on and off in the data center instead of hard drives, for all you know. Since the customer is already accepting a black-box service, why not sell them a purpose-built appliance? A totally sealed, rack-mount box (or blade system, for bigger enterprises) that just plugs into the network, picks up a DHCP address, registers itself in DNS, and is ready to go? Even better, it does not need to be a web-based application. Because it resides within the network, large amounts of data transfer, such as an application installation, are not a problem. It could very easily have an installer run on the clients to install software from itself. It could run out to the vendor's system periodically to fetch updates (and push them out to clients) and report usage information in the case of a per-usage billing situation (see the rough sketch below). Alternatively, it could act as a web server or some other thin client server. It could even be running Citrix or Terminal Services. It doesn't matter. The point is, if SaaS vendors sold an appliance that delivered the service to the customer, instead of having the customer interact with the vendor's network in real time or nearly real time, the vast majority of the technical and business problems with SaaS would be overcome.
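
      To be concrete about "runs out to the vendor's system periodically," here is a rough sketch of the sort of phone-home loop such an appliance might run. The vendor URL, endpoints, appliance ID, and interval are all invented for the illustration; the point is simply that the customer's network only ever sees small, occasional outbound requests:

          import json
          import time
          import urllib.request

          # Hypothetical vendor endpoint; a real appliance would use its own licensed URL and credentials.
          VENDOR_URL = "https://updates.example-saas-vendor.com/api"
          CHECK_INTERVAL_SECONDS = 6 * 60 * 60   # phone home every six hours

          def fetch_updates():
              """Ask the vendor whether any software updates are waiting for this appliance."""
              with urllib.request.urlopen(VENDOR_URL + "/updates?appliance_id=ABC123") as resp:
                  return json.load(resp)

          def report_usage(active_users):
              """Send the per-usage billing counter back to the vendor."""
              body = json.dumps({"appliance_id": "ABC123", "active_users": active_users}).encode()
              req = urllib.request.Request(VENDOR_URL + "/usage", data=body,
                                           headers={"Content-Type": "application/json"})
              urllib.request.urlopen(req)

          while True:
              for update in fetch_updates().get("pending", []):
                  print("would stage update:", update)   # a real appliance would install and push to clients
              report_usage(active_users=42)              # placeholder count
              time.sleep(CHECK_INTERVAL_SECONDS)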

      Tell me what you think.

    • #3077367

      How search engines are hurting quality content sites

      by justin james ·

      In reply to Critical Thinking

      Jakob Nielsen has posted what I believe to be an extremely important article, which discusses the effect that search engines have on revenue for commercial websites. Mitch Ratcliffe at ZD Net also has a good blog up about the need to pay people for their contributions to websites.

      How are these two ideas connected?

      People do very little without motivation. Money is a great motivator. What Mr. Nielsen's article points out is that it is getting increasingly difficult to make a profit on the Internet. Quite some time ago, search engines replaced DNS as the way people find sites. Now, getting found in search engines requires feeding an increasingly large amount of money into the gaping maw of per-click search engine advertising. When Yahoo! first started its paid review system, it was acceptable: you paid once, and that was that. Users could find your site. Now, you need to pony up cash each time someone comes to your site, unless you are lucky enough to be on the first page of the organic search results.

      With more and more websites seeing visitors come directly into one page, and not leaving that page, they need to monetize their website on every single page, and make enough money for every single page to pay for that expensive per-click advertising. If a user does not find what they were looking for on the page that the search engine sent them to, they go right back to the search engine. Your site’s “stickiness” is no longer important.

      Unless you are selling a product with a great profit margin, you are in big trouble. Content websites do not have a great profit margin on a per-visitor basis. Take a news-related website. Let's say it makes five cents per page view, on average, from advertising. It can't very well be buying its hits for a dollar each, can it? (The quick arithmetic below spells it out.) In other words, producing content online is increasingly less profitable, thanks to search engines.
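
      Just to spell out that arithmetic, using the illustrative numbers above rather than real figures from any site:

          revenue_per_page_view = 0.05   # the five cents of ad revenue per page from the example
          cost_per_paid_click = 1.00     # what the search engine charges for delivering the visitor

          pages_to_break_even = cost_per_paid_click / revenue_per_page_view
          print(pages_to_break_even)     # 20.0 -- a paid visitor has to view twenty pages just to cover the click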

      The end result, I believe, is that many if not most professional websites will go under. It is already happening to many newspaper websites. Their print sales are being ruined by online news, and their online sites are being forced to charge for subscriptions or lose money. Users are increasingly going to blogs, wikis, and other amateur websites for their news. I will admit, I have always been prejudiced against blogs, wikis, and other "community created" websites when it comes to objective (or "as objective as possible") information. Why? Because most of them are not making money and have no editorial control. Blogs are primarily done as vanity projects, for someone to put their thinly disguised opinions up as "news," with links to like-minded blogs as "proof," and to solicit comments that stroke their egos. Wikis are great examples of groupthink, with a bunch of like-minded people democratizing Truth. When this replaces professional, relatively objective websites, we are in trouble.

      There is a way out, and that is for content websites to find ways to generate traffic that work around the search engines. RSS and traditional email newsletters are one way; you need only get the user to your site once through a search engine to get them to subscribe to your RSS feed or email newsletter. Search engine optimization is another path, as it allows you to get traffic through organic search engine results rather than paying for placement. As Mr. Nielsen points out, increasing website usability is another solution: multiply the amount of money you make per visitor enough to offset or exceed the increased cost of getting the visitor, and you're in good shape. There are lots of ways. Confederations of content sites are another idea; it is probably much easier to get someone to pay a subscription fee (even a higher one) for access to a group of websites (or better yet, information from a number of websites aggregated into one site) than it is to convince them to pay a fee to a number of different sites. For example, I would be far more willing to pay, say, $100 a year for access to The New Yorker, The Economist, The New York Times, The Washington Post, and, say, Encyclopedia Britannica than to give each one of these sites $10 per year. Indeed, I would love to see content websites bundled up like cable TV packages.

      In any event, this is not some Chicken Little, "sky is falling" scenario. This is actually happening right now. Try buying space on Google; prices are going up, much faster than the profit margins of any product that I am aware of. The entrepreneurs who try breaking into business online are going to find that their marketing costs are a lot higher today than they were five years ago. Doing business online is not as cheap as it used to be, and the inexpensive nature of online business was a major driver behind the Internet's explosive growth to begin with. The Internet reminds me more and more of the California Gold Rush, where the people who made the real money were the people selling picks, shovels, provisions, etc. to the prospectors. Search engine advertising is now more critical than ever, and sadly, it is now a recurring, variable cost directly tied to the number of customers you have. Imagine if a store in the mall had to pay a fee to the mall for each person who walked through its doors, instead of a flat fee for rent each month. That is where we are headed, and it totally changes the game. Or worse, imagine if every time someone resolved your domain name via DNS you had to pay a fee. Because more and more, that is what search engine advertising is looking like.

      Tell me what you think.

      • #3258134

        How search engines are hurting quality content sites

        by librarygeek ·

        In reply to How search engines are hurting quality content sites

        Hi, My work is focused upon the organization of information, search & retrieval — so this topic is right up my alley! To be clear — search & retrieval looks at how people look for (search) information and then read, view, and/or use it. Search does *not* just focus upon search engines.

        You said:

        >People do very little without motivation. Money is a great motivator.

        There are, however, other means of motivation. Observe the open source software community for one of the best examples of reputation and recognition as motivators. The thing that big media doesn't seem to "get" is that they need to motivate readers to read them. Why should a reader bother with that particular site? Print is having a very difficult time grasping and adjusting to the new paradigm. They are not used to engaging in conversation. Yet the fabulous aspect of the web is the conversation via links, text, and reuse. A business, however, *is* motivated by money.

        >  Now, you need to pony up cash each time someone comes to your site, unless you are lucky enough to be in the first page of the organic search results.

        Here is the key! You need to provide *quality* content. You need to have content that is engaging, current, and provides added value that I cannot get elsewhere. Have you noticed how often you can visit newspaper sites and see articles that are virtually identical? Too often, they are pulling them from newswires with little or no additional investigation, follow-up, or new angles of their own. Businesses that provide content as their business should not need to hire a search engine optimizer for help. Nor should they need to purchase search results. In addition to usability, they need to open their content to search engines. Many already follow your recommendation of locking users in via subscription. The problem is that they often lock out search engines — thus knocking themselves off of the search pages. When you lock down content, trying to get people to pay for it, you also lock out potential readers. I don't pass a link along to someone when they would have to subscribe. Providing a link to an article is much like passing along a clipped article. But it's even better, since the new viewer might start reading other areas of your site! Locking down content reflects a business caught in a print paradigm — those businesses are dying. It is a fact of life in a capitalist society that one must adapt to survive. Every media innovation has eliminated those who could not adapt and enriched those who learned to use them (think of radio, TV, VCRs).

        ~Library Geek

    • #3133083

      The Lone Wolf IT Guy

      by justin james ·

      In reply to Critical Thinking

      I just read a pretty good article on CodeProject (http://www.codeproject.com/gen/work/standaloneprogrammer.asp) about how to be a successful programmer when you’re the only programmer at a company. The suggestions in the article are all good. I am in that situation as well. Not only am I the only experienced programmer in my company (there are other people there who write code, but on a very limited basis, and nothing very in-depth), but I am also the systems administrator.

      All in all, it is a pretty daunting task. If the servers blow up while I am facing a deadline to write an application… well, get the coffee brewing because it’s going to be a long night. Our customers have the luxury of having dedicated IT people – here are the DBAs, over there are the programmers (sub-divided into Web dev folks, desktop application developers, specialized Excel/Access people, etc.), the sys admins are hidden in the data room, and so forth.

      In some ways, I envy these companies. What I would not give to not have to keep flipping between Windows 2003 Enterprise Edition troubleshooting, FreeBSD troubleshooting, database optimization (let's not forget, I get to run MySQL, Microsoft SQL Server, and Oracle, to add to the confusion), and programming in a hundred different languages – half of which seem to be VB variants, just to keep me on my toes at all times.

      The confusion can be pretty funny sometimes, especially when I am multitasking. I recently told a customer to try “telnet’ing to port 443 to check for connectivity” when I meant to tell her to “comment out the if/then block” because I was troubleshooting an SSL problem on my server while helping her troubleshoot our code over the phone. Another classic is when people ask for a piece of code advice, and I give them the right answer… in the wrong language. Too many times, I have crafted a great Perl-ish regex for someone to elegantly solve their problem in one statement, only to remember that they are using VBA (or worse, SQL).

      The situation has its rewards, however. I get to build experience along parallel lines, for instance. I can honestly say that in one year at this job, I have "1 year Oracle, MySQL, and MSSQL DBA experience, 1 year VBA with Word, Excel and Access, 1 year Windows 2003 and FreeBSD systems administration, 1 year VB.Net, 1 year ASP.Net, 1 year blah blah blah…" If I were one of those specialized IT people, I would need to work for 20 years to get one year of experience in so many technologies. Of course, I came into the job with plenty of experience in a lot of different things, otherwise I would not be qualified, but still, it's great to get a wide variety of experiences all at once.

      On that note, the work is rarely boring. I don't get mentally stagnant, and there is always something to do. If I am not working on a project, there is always some systems administration that needs to get done. If I don't have any internal projects to finish, my help is always welcome on someone else's project. Do I get bored? Sure I do. But I get bored a lot less often than I did when I was a pure programmer, or a pure systems administrator, or a pure whatever.

      To all of the other lone wolves out there, my hat goes off to you.

      • #3253831

        The Lone Wolf IT Guy

        by a.lesenfants ·

        In reply to The Lone Wolf IT Guy

        Hello there,

        Just to tell you I'm in the same situation as you are. I'm only 25 years old and, though I don't have that much experience, I had the chance to be offered the role of IT manager here. I jumped on it and took the challenge, even if I wasn't sure I could beat it!

        So here I am troubleshooting users, maintaining the network and our ERP, sourcing and buying new equipment, as well as developing and deploying applications... and all by myself, with no one more experienced to help me or guide me when I face a problem. My only buddy is, in fact, the Internet and its various blogs, forums, and sites where you can hope to find the right information or some hints that will help you through.

        I have reached the exact same conclusion as you after a little more than a year in the business. It's sometimes hard to face problems, especially when you are in my situation, but with this kind of job you get the opportunity to develop your skills in so many IT domains, and so fast, that it is a real chance. It's so great to have a job like this one!!

      • #3091188

        The Lone Wolf IT Guy

        by wilrogjr ·

        In reply to The Lone Wolf IT Guy

        Another lone wolf here – you have to wear many hats and you are always busy. The weird part is peers at larger organizations not believing all of the things you do, have access to or just plain have experience in.

      • #3133692

        The Lone Wolf IT Guy

        by apotheon ·

        In reply to The Lone Wolf IT Guy

        Been there, done that — for most of my IT career. Okay, so basically for all of it. I’m sort of an IT renaissance man by necessity.

    • #3080725

      Email servers are a commodity. Email contents are not.

      by justin james ·

      In reply to Critical Thinking

      Note: this was originally posted as a comment (http://www.zdnet.com/5208-10532-0.html?forumID=1&threadID=17796&messageID=350047&start=-1) to David Berlind’s article Yes. You should outsource your e-mail

      Mr. Berlind is absolutely correct in quite a large number of his statements, and he does indeed provide a compelling argument for outsourcing email. But he has made some mistakes, which is where we differ on this topic.

      The number one problem here is that Mr. Berlind's original blog post is titled "Google to provide email hosting?" and is 100% about outsourcing your email to Google. I put forth the question: "Now, let's look at the premise: assuming I would outsource my email, why would I outsource it to Google, of all companies?"

      Mr. Berlind has not addressed this at all. Not in the slightest. He provides a good (but flawed) argument in favor of outsourcing. He does not even touch the idea that Google should be the one to do it. Mr. Ou adds a quick little list of reasons why, even if outsourcing email is the right choice for your company, Google is not the one to do it (http://www.zdnet.com/5208-10532-0.html?forumID=1&threadID=17781&messageID=349545&start=-1). Admittedly, he did not go into nearly as much length or detail as I would have, but his comments really don't need much explaining (except for the user interface bit; "GMail is pretty decent as far as webmail goes, but garbage compared to a desktop app" is a good way of putting it).

      Now, onto the topic of the current blog post by Mr. Berlind: “Yes. You should outsource your e-mail”.

      I say, “No. You should NOT outsource your e-mail”.

      “So, one question I have for Mr. James is, of all the stuff being outsourced today, what of it isn’t mission critical?”

      That all depends on the business, but I have not encountered a business that did not consider email to be mission critical since about 1998, if not earlier. Furthermore, the fact that companies *are* outsourcing portions of their business processes does not mean that they *should*. I can think of a number of analogies for this, but the principle is best summed up by David Hume: "One cannot derive an 'ought' from an 'is'." If everyone in New York jumped off the Empire State Building, that certainly would not mean that I should as well.

      “But aside from the handful of hand-built customized competitive advantage-driving systems that integrate messaging and email into their functionality, are any of us really that deluded to believe that insourcing something as basic as email can make us more competitive than the next company (setting aside those companies with real security concerns that can prove their insourced system is more secure than the outsourced one).”

      This is a correct statement on the technological level, but an incorrect statement on the business level. On the business level, it is not the email system itself that matters; if carrier pigeons were ferrying letters printed on Gutenberg presses at the speed of light, businesses would use them. What matters on the business level is the contents of the email; that is the mission-critical part of email. Email is the lifeblood of companies, having replaced to a large extent phones, couriers, postal systems, fax machines, and so forth. What is contained in an email is often of an incredibly sensitive or important nature. Furthermore, archived emails are frequently a knowledge repository. There is a reason why my personal email archives reach back to 2000 (and would go back to 1996, if I had not had a lapse in judgement in 2000). My major disappointment with email is that the tools for mining that data are still rather primitive.

      Any outsourcing situation has to achieve at least one of two goals: better value or lower cost. There are two ways to handle outsourced email. One is the way most small businesses do it: have an external host, pick up the mail via POP3, and store it internally. The other is to have an external IMAP or Exchange server offsite and leave the data there. In the first situation, all you have outsourced are two TCP/IP transactions, one for SMTP and one for POP3. There is no added value here, and there is no reduced cost. If this is all you need, get the cheapest server you can find, load Linux or BSD on it, and load qmail. For $500 in hardware, plus the recurring fees for DNS and domain name registration, you are providing the same level of service to your company that the external provider is. Heck, your existing Windows server (nearly every business larger than 10 employees has one now) comes with an SMTP and POP3 server on it; use those if you don't feel like getting a second server. So outsourcing this level of email service adds zero value and costs a lot more. The second option also adds no value (again, you can put Exchange onto a server yourself) and costs more. Look at the numbers you quote from Centerbeam: $45 per user per month. In a company of 10 people, that is $5,400 per year (the quick math below lays it out). That is more expensive than a server with Windows 2003 and Exchange, plus a lot of data storage AND a backup solution! Go the open source route (didn't you guys just blog about Scalix a day or two ago?) and you have enough money left over to buy every employee an Xbox 360 as a bonus. Gee, that doesn't seem like such a value at all, does it?
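
      To make the math explicit (the per-user rate is the Centerbeam figure quoted above; the in-house numbers are the rough ones from this post, with an assumed ballpark for domain and DNS fees):

          users = 10
          centerbeam_per_user_per_month = 45.00
          print(users * centerbeam_per_user_per_month * 12)    # 5400.0 per year, every year

          # The do-it-yourself route described above: a cheap box running BSD or Linux
          # with qmail, plus recurring domain registration and DNS (ballpark, assumed).
          in_house_hardware = 500.00
          domain_and_dns_per_year = 50.00
          print(in_house_hardware + domain_and_dns_per_year)   # 550.0 in year one, roughly 50.0 a year after that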

      “What we offer to do is the hard work for people that they can’t afford to do themselves.”

      I know this is you quoting someone else, so I am now arguing with them, not you. As I show above, anyone who isn't working on the US Government's budget can see the math problems here. How much does it cost to run your own SMTP/POP3 server? It takes what, a few hours to properly set up and establish a server using either *Nix with qmail or Scalix, or Windows with the built-in servers or Exchange? The Windows route is especially useful, because all of your account management is handled via Active Directory, so that is one less system to learn. Is an outsourced server, regardless of what it is (POP3 or IMAP/Exchange), going to integrate with your in-house identity management system? I think not. What, you're going to set up a PPTP connection to their system and do a trust delegation between their AD system and yours, just so you don't have to manage separate usernames and passwords? Or would you prefer some awful webadmin system to go in and change stuff?

      “That covers desktop management (anti virus, backup and restore everyday, 24/7 800# dial up helpdesk, server management, email management, VPN services, etc.).”

      The in-house solution, except for 24×7 support, is still cheaper. Sorry.

      “All a banker wants is more bankers and salespeople on staff. They don’t want a Microsoft Certified Exchange Engineer on staff who is only available for one shift a day.   Even if you do run an Exchange Server with three shifts of engineers 7 days a week, they’ll be advising you on best practices such as backup and restore. They’ll say you need a Storage Area Network (SAN) and need to send tapes to Iron Mountain everyday.”

      This guy makes me forget just about every bad example I have ever given in a ZDNet TalkBack. Bankers have all of these things anyways. Bankers run a 24x7 database with millions if not billions of entries, where even a moment's worth of downtime can cost millions of dollars. This organization is going to be unable to support an additional few servers for email? But let's pretend he didn't say "banker". Let's pretend he said "small business owner". If having these hordes of MCSEs on staff is a problem for him, he may want to check out *Nix+qmail. I personally cannot vouch for Scalix (never used it, relatively new), but *Nix+qmail is a time-tested, battle-hardened system. It requires zero maintenance. None. Heck, Exchange, when properly configured, doesn't need any maintenance anyways. And at the end of the day, what good is his elite commando team of MCSEs going to do for a business owner if that business does not have someone on their end who can actually understand what they are saying and how to work with them? The only time his MCSE army is a decided advantage is when their server starts behaving erratically and the problem is definitely on their end. If their software does something like that, where you need an MCSE to troubleshoot something that was working fine, then maybe that software isn't very good.

      “[T]he point is that a leveraged model (where an outsourcing outfit spreads the infrastucture costs across more users than you can) is not only going to save you a lot of money, but headaces (sic) too.”

      Economies of scale are an idea I can buy into. But if they are leveraging economies of scale so well, why do they need to charge $45/user/month? Earthlink charges me $6/month for a POP3-only account. Are Centerbeam's economies of scale really so bad that they need to charge nearly 8 times as much for Exchange services? Maybe I need to reconsider those Exchange servers at my company and put my new BSD server to work on email duties; the idea that Exchange is 8 times more expensive than basic POP3 (and with economies of scale, it must be a few dozen times more expensive for our 5-person company!) is total hogwash. Economies of scale spread the cost of a Windows & Exchange license out to something like 25 cents per user. So they're just ripping you off. Sorry, I don't like to be ripped off, and neither does my boss.

      And what headaches are they really solving for me? Managing and maintaining an email server? It seems to me like they are giving me new problems, not taking away any existing ones. Let’s make a headache list:

      In-house:
      – Hardware failure
      – Network failure (immediate Internet connection and LAN only)
      – User maintenance
      – Initial installation and configuration
      – Backup/restore
      – Patching
      – Security (95% of this is a subset of install/config)

      Outsourced:
      -/+ Hardware failure (shouldn't be a problem if they are doing their job right)
      – Network failure (their network AND our immediate Internet connection and LAN; we've doubled our headaches)
      – User maintenance (compounded by it not being part of my local authentication scheme)

      So really, all I am giving up is responsibility for the hardware, backups, and maintenance. As I have already stated, email servers require nearly zero maintenance. I am patching my internal systems anyways, so this adds one more item to the list. Backups, again, are something I should be doing anyways; what's one more server to dump to tape/SAN/NAS? And for this I would be paying $45/user/month, which is much more expensive than a single Windows server in a 10 person environment? And if I have a large company, I can apply economies of scale to myself. An Exchange server can handle 1,000 users without a problem. Can my IT budget handle $45,000 per month for email alone? That's the cost of adding 6 MCSEs to my staff on a full-time basis! And then I could have two of them monitoring my servers 24×7. Hmm, maybe I had better stop discussing Centerbeam's business model before his investors pull out now. Or better yet, maybe I should call those investors and ask them if they'd be interested in this bridge I have to sell, it connects Brooklyn to Manhattan…
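
      For the curious, here is a rough back-of-the-envelope sketch of that math. The $45/user/month figure is the quote discussed above; the in-house numbers (hardware amortization, licensing, admin hours) are placeholder assumptions of my own, purely for illustration, so plug in your own figures.

```javascript
// Rough back-of-the-envelope comparison of outsourced vs. in-house email cost.
// The $45/user/month figure comes from the discussion above; the in-house
// numbers (server, licensing, admin time) are placeholder assumptions only.
function monthlyEmailCost(users) {
    var outsourced = users * 45;                  // $45/user/month quote

    var serverHardware = 4000 / 36;               // assume a ~$4,000 server amortized over 3 years
    var licensing      = (1500 + users * 8) / 36; // assumed server license + per-user CALs, amortized
    var adminTime      = 4 * 75;                  // assume ~4 hours/month of admin time at $75/hour
    var inHouse        = serverHardware + licensing + adminTime;

    return { users: users, outsourced: outsourced, inHouse: Math.round(inHouse) };
}

// Even a 10-person shop pays $450/month outsourced; at 1,000 users the
// outsourced bill is $45,000/month.
console.log(monthlyEmailCost(10));
console.log(monthlyEmailCost(1000));
```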

      “Raise your hand if you’ve used GMail, Yahoo Mail, AOL’s mail or HotMail because you needed to send mail but couldn’t get access to your corporate email system (for whatever reasons).”

      I'll agree with you on this one. It's happened to all of us. On the other hand, this is an apples-to-oranges comparison. If I had the money to pay for Centerbeam's services, I would have the money for redundant email servers, in which case the only thing that would take me down would be a disaster (natural disaster, virus/worm, fire, etc.) or a complete network outage, in which case my users would not be able to reach GMail, HotMail, etc. or Centerbeam's servers either. So once again, this flounders on the cost issue. And also remember, I'm comparing Centerbeam's Exchange servers to in-house Exchange servers. If you compare a plain old outsourced POP3 to an in-house *Nix server, the numbers are even more in favor of the in-house solution.

      “Email systems, as it turns out, aren’t that easy to run 24/7.”

      I have been doing it for years. The only things that ever go wrong are things that take down an entire server or the whole network. Again, the price of in-house vs. outsourced makes this point hard to argue.

      “Lastly, for a commodity system like email, what leverage do you have over your certified email engineer to keep the email systems up and running 24/7?  His or her job? Oh, that’s what you want.  You’d rather spend time hiring and firing email engineers than making money for your company? Service Level Agreements (SLAs) are a lot easier to negotiate and enforce with service providers than they are during an employee’s annual review.”

      Ah yes, my most favorite topic in the world! I guess I *do* have to rehash this topic. OK, time to brew a fresh pot of coffee!

      First, some links to the extensive library of my thoughts on this subject:

      * http://www.zdnet.com/5208-10532-0.html?forumID=1&threadID=16070&messageID=318442&start=-1

      ^^^^^^^^^ Number one most important post about the subject

      * http://www.zdnet.com/5208-11406-0.html?forumID=1&threadID=16244&messageID=321399&start=-1

      * http://www.zdnet.com/5208-11406-0.html?forumID=1&threadID=16278&messageID=322734

      * http://www.zdnet.com/5208-11406-0.html?forumID=1&threadID=16324&messageID=324436&start=-1

      Wow! There’s a lot of real-world, real-life, in-the-trenches experience in those links!

      Now, to be fair, I don't think that outsourcing is always bad; indeed, I have presented a compelling business case for it under certain circumstances: http://techrepublic.com.com/5254-6257-0.html?forumID=99&threadID=184332&messageID=1921068&id=2926438

      My direct response to Mr. Berlind's statement: have you ever worked someplace, had a hard time with a customer, and had the boss pull you aside and say, "look, I know you're right and the customer is wrong, but we have to swallow our pride and give them what they want"? I have. That's the way companies work, as long as it is profitable for them. As soon as giving the customer what they want is no longer profitable, they say "no". If a company cannot deliver on an SLA (no measurability, no proof of failure, little enforceability, blah blah blah TPVs stink blah blah blah, just some self-deprecation there at this late hour), you are still tied to them for the length of the contract. And what are you going to do? Spank the CEO for being naughty? It isn't like the underpaid, underexperienced, fresh-from-working-at-McDonalds-but-know-how-to-set-up-a-CounterStrike-server kids who staff Third Party Vendors are going to be held responsible if a customer is lost. I am a big fan of "The Buck Stops Here". TPVs always manage to find a reason why it isn't their fault, why the SLA wasn't truly violated, etc. For a TPV, "The Buck Stops Here" really means "Your Money Ends Up In Our Bank Account".

      Employee annual reviews are a lot easier to manage than SLAs. I can directly measure and manage my employee's success. SLAs are notoriously difficult to manage. I have seen cases where a customer spent nearly as much time and money simply managing the SLA as they would have spent managing the service themselves. That is ridiculous. If you think SLAs are easy to manage and enforce, try an experiment: call your cable company to make a service call. You will get a 4-hour time frame where you must be home (heaven forbid you're in the bathroom when they come by; "The Cable Man Knocks Once" would be a good film), and chances are they will be late anyways. If you're lucky, they'll give you some excuse about it. I remember working for a TPV and being instructed by managers to "invent" weather conditions that caused SLA misses, since poor weather was an SLA escape clause. How much money will you spend just hiring lawyers to 1) write the SLA, 2) help you get out of the contract when the SLA keeps being broken, and 3) sue to recoup the costs to your business when the SLA is blown? If I have a bad employee who makes a serious goof, I can take them to task, or even fire them if need be and replace them with a more competent person. If an SLA is blown, there is no recourse.

      Finally, there is the issue of commoditization itself. Declining quality is the largest result of commoditization, outside of pricing. Look at cars. The only reason cars improved one bit after 1972 is that foreign competition started selling better cars at better prices in the '80s. Before that, American cars had become commoditized to the point where they were all equally junk. Now, American cars are often significantly better than their foreign counterparts, because they were forced out of commodity status. Consumer electronics is another example. Even though cell phones are cheaper now than five years ago, I spend more on them because their quality stinks. My year-old cell phone has worse battery life now than my friend's 5 year old analog phone. The worst thing that can happen, outside of a destructive monopoly, is commoditization. It freezes the desire to improve quality and replaces it with ruthless cost cutting to match the price cutting. When you cannot compete on features or quality because everyone is the same, then no one cares about them. Again, cell phones. At this point, consumers expect poor service, because that is the price we paid to save money. There are no "premium" or "luxury" carriers out there (well, Verizon is a bit pricey, and they do seem to have slightly better coverage, from my experience); in general, cell phones stink. Why? Because with today's price slashing, no one can afford to innovate.

      J.Ja

      • #3101473

        Email servers are a commodity. Email contents are not.

        by sparkin ·

        In reply to Email servers are a commodity. Email contents are not.

        Nicely thought out and balanced rebuttal.

      • #3100782

        Email servers are a commodity. Email contents are not.

        by joanne lowery ·

        In reply to Email servers are a commodity. Email contents are not.

        For outsourcing there has to be a break-even point beyond which you would provide the service in-house rather than outsource. At $45.00 per month, outsourcing might work for 10, 20, or 30 employees. At some point, though, the price starts to equal the cost of in-house service. You would need to add up the cost of hardware, software, support, disaster recovery, maintenance, and user management.

        You would also need to compare the quality of service from the outsourcer. Do you still get groupware, public folders, scheduling, contact sharing, etc.?

        As for tech support, outsourcing that makes a lot of sense. My company provides outsourced support for a number of SMB clients. With IPSec and PPTP remote connections we can support most sites without even needing to leave our office.

        I believe outsourcing might be a good idea, but the cost justification needs to be real and not ideological.

         

    • #3101542

      Well, forget you too

      by justin james ·

      In reply to Critical Thinking

      I just spent an hour writing a blog post. Only to have my connection drop as I hit “Submit”. When the connection came back up, “Refresh” would not resubmit my entry. “Back” restored all of the form field entries EXCEPT the article itself.

      This is why hosted solutions stink. This is why on-demand stinks. This is why thin clients stink. This is why AJAX stinks. Because a desktop application would have been performing an automatic save to disk every few minutes. And none of those other applications get write-to-local-disk privileges, because that would be a gaping security hole. It has been well over ten years since I used a desktop application that could lose more than a few minutes' worth of work in the event of system failure, network disconnection, etc. It was five minutes ago that an online "application" did this to me. What a complete and utter bummer.

      I have to remember to write these blogs in a plain text editor and then copy/paste into this form to make this thing work right. It's bad enough that every character I type, every single keystroke, causes JavaScript (save me from JavaScript, please, the interpreters are slower than my aunt's driving) to reevaluate the entire article and try to WYSIWYG it, which means that the more I write, the longer the delay between me hitting a key and the key affecting what is on my screen. All of this for a total of 7 dinky buttons that do bold, italic, link, unordered list, ordered list, image insert, and "switch to code mode", plus a drop-down that lets me choose a block type for the current paragraph. This is stupid. How about if, instead, the system had a little timer, and waited until I either keyed a command key for a function, clicked a function button, or had stopped typing for a few seconds before evaluating anything? Would that be too much? Is it necessary for it to re-download 14, that's right, FOURTEEN images every time I type a letter? Thankfully I'm not on dialup, and as it is I feel like I'm trying to type via telnet on a 2400 baud connection. Heck, old BBSs on a 2400 baud modem would do a screen refresh faster than this junk.
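
      Something like the following little sketch is all that timer would take. This is just an illustration; the "articleBody" id and the reevaluateArticle() function are made-up stand-ins for whatever the blog editor actually does internally.

```javascript
// A minimal "wait until the user stops typing" timer. The element id and
// reevaluateArticle() are hypothetical placeholders for the editor's real code.
var idleTimer = null;

function scheduleReevaluation(delayMs) {
    if (idleTimer) {
        clearTimeout(idleTimer);   // a new keystroke resets the countdown
    }
    idleTimer = setTimeout(function () {
        reevaluateArticle();       // only re-parse after the user pauses
    }, delayMs);
}

// Instead of re-parsing on every keypress, re-parse two seconds after the last one:
document.getElementById("articleBody").onkeyup = function () {
    scheduleReevaluation(2000);
};

function reevaluateArticle() {
    // placeholder for whatever WYSIWYG re-rendering the editor performs
}
```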

      AJAX, thin clients, etc. are like going back to 1989, without the cool ANSI/ASCII art by ICE, The Jargon File, 256 color GIFs of women in bikinis, the DOOM 1 shareware installer, The Bastard Operator From Hell, Legend of the Red Dragon, 2600 magazine, Phrack, music in MOD format, and all of the other fond memories of my youth.

      J.Ja

      • #3088176

        Well, forget you too

        by apotheon ·

        In reply to Well, forget you too

        That’s not really the fault of AJAX. That’s the fault of poorly conceived AJAX. Granted, I’ve only seen a grand total of about three significant implementations of AJAX that weren’t poorly conceived, and the other couple hundred or so all ranged from mediocre-bad to downright heinous, but good AJAX is possible. It’s real. I’ve seen it. I swear.

      • #3089544

        Well, forget you too

        by superdisco ·

        In reply to Well, forget you too

        Well you really answered your own comment with:

        “I have to remember to write these blogs in a plain text editor then copy/paste into this form to make this thing work right.”

        Personally, I don't trust ANY web form to do the right thing, and therefore write everything in a text editor first, unless it is a six-line comment like this. Alternatively, just before I hit submit, in case I have timed out the session etc., if I have written straight into a form I quickly grab it with Ctrl+A and Ctrl+C and then hit that button.

        That way, it's a frustration-free experience! 🙂

        superdisco (aka Karen)

      • #3085699

        Well, forget you too

        by somebozo ·

        In reply to Well, forget you too

        Well, Google's GMail does auto-save while you are composing an email. And it does it transparently, without any interaction or disturbance to the end user. Therefore it's not the online applications that stink, it's the lack of thought from the developers who write them.

        Usually I put my long posts in Notepad and then paste them into the web page text field when done.

        Notepad is a desktop application and it does not auto-save, so again you may lose data if, say, the power goes off or the system hangs.

      • #3263049

        Well, forget you too

        by wayne m. ·

        In reply to Well, forget you too

        I Share Your Frustration 

        I would diagnose the problem as being session timeout more than anything. This has always been a major problem with every web-based application that I have seen. Neither the server nor the web client is aware of user activity. I have seen numerous kludges implemented just so that a user can spend a reasonable amount of time entering a thought.

        I also remember the telnet days, when getting a 9600 baud modem meant life was good. VT100-encoded screens displayed and updated faster than HTML does even over a LAN connection. If we advance any further, I think I'll revert to faxes.

        Anyway, I tend to avoid posting anything in these blogs unless I really, really want to enter it. I have also given up when my first attempt fails, usually indicated by a long period of no action. The only trick I have found is to copy the text box (Windows trick) before I submit, kill the browser when the submit fails, re-login, and paste and post. Thanks for letting me vent as well.

    • #3271579

      The sorry state of web development

      by justin james ·

      In reply to Critical Thinking

      UPDATE (2/28/2006): I’ve posted a follow up to this article that presents positive ideas on how to change this situation.

      Last night I read a great article (http://www.veen.com/jeff/archives/000622.html) from about 16 months ago about how lousy most open source CMS (Content Management System) packages were. While focused upon open source, the author mentioned numerous times in the article and follow-up comments that his complaints also apply to commercial CMSs.

      Sadly, all of his complaints are still true, and apply not just to open source (or closed source) CMSs, but to about 90% of the web applications out there.

      The simple truth of the matter is, web developers generally stink, not just as programmers, but as user interface engineers.

      Over the past year-and-a-half or so, I have spent countless hours installing, trying, and uninstalling literally dozens of various open source CMS systems, without once finding something that works right, if at all. The best one out there, for my needs, was WebGUI. Too bad it broke the moment I tried to upgrade it, Apache, mod_perl, perl, or just about any other dependency it had!

      Ten years after the Web revolution began in earnest, I still find myself using systems that are not much better than the systems I was using ten years ago. Part of the problem is the continual state of change within the Web development world. Every time a new language or framework or web server or technique (like AJAX) or whatever starts to gain momentum, all development on existing systems seems to halt, and everyone decides to do everything in the new system. By the time the new systems are about as good as the old ones, another technique, language, or whatever seems to come out. By the time the server-side Java and ASP web apps got to be as good as the CGI/Perl they were replacing, .Net and PHP came out. Now that .Net and PHP apps are getting as good as the Java and ASP pages they replaced, .Net 2.0 and AJAX are suddenly the rage.

      The fact of the matter is, if all of that time had been spent making things work in CGI/Perl (or whatever system had come first), I might have a chance of finding a quality web application.

      AJAX is the current fad. It seems to be predicated on the idea that since Google Maps is so good, and it uses AJAX, AJAX should be used everywhere. Here's the truth:

      • Anything you do with AJAX also has to be written server-side, because otherwise your application will not gracefully degrade on a browser without JavaScript, or with JavaScript turned off.
      • The tools to write and debug JavaScript, ten years or so after it came out, are atrocious.
      • Different Web browsers still do not agree 100% on how to render HTML and CSS, nor do they implement JavaScript identically, forcing you to either stick with the subset of HTML/CSS/JavaScript that renders and executes identically (or at least "well enough") everywhere, or cut yourself off from a significant portion of visitors.
      • People are using XML for something it simply was not designed to do. XML was designed to be used in such a way (in conjunction with XSLT, XSD, and UDDI) that systems could automatically discover and use each other. Thus, XML is written for the "lowest common denominator", which makes it extremely wasteful in terms of the resources needed to create it, transmit it, and consume it. People are using XML to pass data back and forth between different parts of their code (server-side to client-side, and vice versa) because it is quick and easy for them to code that way. The fact is, their applications are significantly slower, both server-side and client-side, than they need to be because of this. It is much quicker, if you know the data format, to pass fixed-length or delimited data back and forth, and nearly as easy to write the code (see the sketch after this list). But because programmers are lazy, they would rather save 30 – 45 minutes of code writing at the expense of creating scalability problems (just compare the file size and parsing time of XML vs. CSV to get an idea of what I mean).
      • JavaScript interpreters are incredibly slow. I recently worked on a Web application where the customer wanted some custom validation done client-side on a 3,000 record, 3 field data set. The web browser would lock up for a minute on this. Actually sending the data to the server and handling it there was faster by a factor of about 10. I don't consider that "progress".
      • HTTP is a connectionless, stateless protocol. In other words, it is pretty much useless without a zillion hacks laid on top of it to accomplish what a desktop application can do with minimal, if any, coding. Look at the design of application servers. They need hundreds of thousands of lines of code, and basically all they do is receive GET/POST/etc. data, pass the appropriate information to an interpreter or compiled software, then return the results. In the process, they perform validation, maintain connection state (either via cookies or session IDs in the URL, both of which are hacks, when you think about it), and so forth. This is utterly ridiculous. Desktop and server software is using Kerberos and other hardened authentication management systems, while Web applications are sending plain text, occasionally protected with SSL. Is this really the best we can do?
      • Web developers are still clueless about interface design. Half of the problem is that a Web developer is frequently forced to work with some sort of graphics designer who was brought up in the print world. Sure, bandwidth is cheap now. But with all of the hoops that an application server jumps through to process each result, a significant portion of the response time of an application is dependent upon how fast the Web server can build up and tear down each connection. An application that increases the number of HTTP requests, regardless of how small they are, is an application that won't scale well. AJAX goes from "make an HTTP transaction with each form submission" to "make an HTTP request with nearly every mouse click". I don't call this a "Good Thing". I call this stupidity. AJAX multiplies, quite significantly, the amount of data going to/from the servers, switches, routers, load balancers, the whole architecture. All in the name of "improving the user experience." At the end of the day, "user experience" is determined less by what "gadgets" are in the software, and more by "how well can the user accomplish their goals?" A slow application doesn't "work", as far as the user is concerned, regardless of the features it has.
      • AJAX “breaks” the user’s expected browsing experience. All of those cute XmlHttpRequest() statements don’t load up in the user’s web history. That means that the “Back” and “Forward” buttons don’t work. If you’re providing the user with an interface that looks like a normal Web page, with “Submit” buttons and so forth, breaking the browser’s interface paradigm is a decidedly bad idea.
      • And last but not least, JavaScript (as well as Java applets and Flash, for that matter) does not get local disk access. Its only recourse, if it wants to save the progress of your work, is to periodically submit the work-in-progress to the web server.
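
      To put a rough number on the XML-versus-delimited point above, here is a trivial sketch. The three-field record layout is invented purely for illustration; build both payloads for the same data and the size gap is obvious before you ever start parsing.

```javascript
// A trivial illustration of the overhead XML adds to "rectangular" data.
// The record layout here is invented purely for the size comparison.
var rows = [];
for (var i = 0; i < 3000; i++) {
    rows.push({ id: i, name: "Customer " + i, balance: ((i * 7) % 1000) + ".00" });
}

function toCsv(data) {
    var out = "id,name,balance\n";
    for (var i = 0; i < data.length; i++) {
        out += data[i].id + "," + data[i].name + "," + data[i].balance + "\n";
    }
    return out;
}

function toXml(data) {
    var out = "<rows>";
    for (var i = 0; i < data.length; i++) {
        out += "<row><id>" + data[i].id + "</id><name>" + data[i].name +
               "</name><balance>" + data[i].balance + "</balance></row>";
    }
    return out + "</rows>";
}

// The XML payload is typically several times the size of the CSV payload
// for exactly the same data.
console.log("CSV bytes: " + toCsv(rows).length);
console.log("XML bytes: " + toXml(rows).length);
```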

      Has anyone actually tried to make something basic even work right? It sure doesn't seem like it. TechRepublic's blog system is a great example of how even basic JavaScript can create a lousy user experience. With every keystroke (and mouse click within the editor), it re-parses the entire blog article. It also refreshes the simple buttons at the top. In other words, with each key I press, it is making 14 (yes, FOURTEEN) [correction: 28!] connections to a web server. That is patently ridiculous. This should be re-written in Flash, or dumbed-down so it doesn't need to do this.

      In fact, just about the only AJAX applications [addendum: I'm talking about just the AJAX portion of the functionality, I'm not particularly impressed by Google Maps' results] I have seen worth using are Google Maps and Outlook Web Access. The rest seem to make my life more frustrating than whatever problem they set out to solve.

      Google's success is a great example of just how lousy web applications are. Google Maps took off like a rocket because its competitors, despite having a five or six year (or more) lead, had wretched interfaces. Mapquest hadn't become noticeably more usable since Day 1. Yahoo was making changes, but came out with them after Google Maps did. Same thing for search. Sure, Google's results were (and still are, but less and less so) better than their competitors'. But their interface is a joy to use [addendum: this is rapidly changing as Google becomes more of a portal]. GMail's biggest "feature" isn't even its interface, it is the amount of storage space. All of a sudden, you can use Web-based email with many of the benefits of a traditional POP3 client. Outside of the storage capacity, GMail wasn't much different from Hotmail or Yahoo Mail.

      Google cleans up because they find a market where the current market leaders have a great idea, maybe even great technology, but provide a lousy user experience anyways. The fact that Google can break into an extremely mature market and blow it wide open is proof that Web applications, by and large, stink. Because even with five, ten years of market domination, the original players still provide a lousy customer experience.

      And at the end of the day, even the most basic network-aware desktop application is easier, faster, more secure, blah blah blah, better in every measurable way than the best Web application.

      J.Ja

      • #3272555

        The sorry state of web development

        by zging ·

        In reply to The sorry state of web development

        Good argument, with valid points (especially AJAX).

        BTW, Flash can save on the local computer (in its own little space).

        How about finishing your article with some ideas/direction that developers can take to improve on the points you've made? It's easy to write a critique, but not a solution!

        Also, "web developers generally stink, not just as programmers, but as user interface engineers" — I think a large number of developers will strongly argue this, especially as web development is changing what "programmer" means. Also, how good is your average 'programmer' at interface design? I'd guess that your average 'developer' has a lot more idea of what they're doing with interfaces than programmers do!

      • #3272554

        The sorry state of web development

        by jaqui ·

        In reply to The sorry state of web development

        "TechRepublic's blog system is a great example of how even basic JavaScript can create a lousy user experience. With every keystroke (and mouse click within the editor), it re-parses the entire blog article. It also refreshes the simple buttons at the top. In other words, with each key I press, it is making 14 (yes, FOURTEEN) [correction: 28!] connections to a web server. That is patently rediculous. This should be re-written in Flash, or dumbed-down so it doesn't need to do this."

        Actually, that's a bad idea; Flash should never be used for anything other than advertisements.
        Site functionality should be built for the lowest common denominator, with no client-side scripting. After all, the browser may not support the feature you are trying to use, so it won't work for those people.
        [ I refuse to install or enable any client-side scripting. I don't get the flicker in the TR blog you talk about. ]

        The way I look at it, if it requires client-side scripting, including JavaScript, then the website contains nothing I need to see or get.

      • #3088177

        The sorry state of web development

        by apotheon ·

        In reply to The sorry state of web development

        I’m not quite the client-side antiscripting zealot that Jaqui is, but it’s true that one of the major problems with web applications is that a lot of the time they don’t degrade gracefully. Here’s a hint for degrading gracefully: If, as a web developer, your application won’t at least run in Firefox with no extensions or plugins, and with Javascript and stylesheets turned off, then you’ve failed. Sadly, there are thousands of websites out there that fail that test, whether it’s because of Flash, Javascript, ActiveX, or nothing more than really gnarly standards-noncompliant CSS.

        Clean and simple design first, then bells and whistles if they actually enhance site functionality somehow: that’s what’s important in web development. AJAX can actually provide a great deal of enhancement to the interface, and it’s not always a bad idea, but there needs to be consideration for those who can’t or won’t use Javascript as well.

        Again, the first rule of web development should be quite obvious. Make sure it degrades gracefully.
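
        Here's a minimal sketch of what that can look like in practice. Everything in it is an assumption for illustration: a plain HTML form (id "commentForm", posting to "/comments") that already works with no scripting at all, plus a script that only layers asynchronous submission on top when the browser can handle it.

```javascript
// Unobtrusive enhancement: the page is assumed to contain a normal
// <form id="commentForm" action="/comments" method="post"> that works
// without any scripting. If this script runs, it upgrades the form to an
// asynchronous submit; if it never runs, nothing is lost.
window.onload = function () {
    var form = document.getElementById("commentForm");
    if (!form || !window.XMLHttpRequest) {
        return; // no form or no XHR support: leave the plain HTML behavior alone
    }
    form.onsubmit = function () {
        var xhr = new XMLHttpRequest();
        xhr.open("POST", form.action, true);
        xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4 && xhr.status === 200) {
                var status = document.getElementById("status"); // optional status area, if present
                if (status) {
                    status.innerHTML = "Comment posted.";
                }
            }
        };
        xhr.send("body=" + encodeURIComponent(form.elements["body"].value));
        return false; // suppress the full-page submit only when XHR is actually in play
    };
};
```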

      • #3088035

        The sorry state of web development

        by mindilator9 ·

        In reply to The sorry state of web development

        I’m gonna have to disagree with Jaqui. To say that Flash should only be used for advertisements is extremely myopic, to say the least. Just because you have no vision for Flash, or are not skilled enough to use it for other things besides banner ads (Flash is like a big GIF animator to you, isn’t it), does not mean its use should be restricted to your limited knowledge.

        The examples for AJAX given by the author nowhere alluded to its inherent solution to the ActiveX debacle. Sure, there's probably a better way to do that too, but if you don't know what it is, I would just as soon stay silent on the issue.
        And finally, what is the point in comparing desktop based apps to web based apps? You start your tirade on bad web developers and their CMSs and end it with “And at the end of the day, even the most basic network aware dekstop (sic)application is easier, faster, more secure, blah blah blah, better in every measurable way than the best Web application.” How does that statement even qualify? I’m gonna skip “Why do I care?” and go straight to “Well, duh.” It kind of has to be. First, it runs on the local machine’s processor, not the server processor too, nor the transfer friction that goes with it. That’s a neat little metaphor meaning all the attributes of a web connection that slow down the information’s travels. Obviously desktop apps don’t have to wait for their requests to come back over hundreds to thousands of miles and various other weak links in the chain. If your point is that open source developers suck because their apps won’t run as well as desktop apps, then you have no point at all. Oh and btw, keep your eye out for the trend where Microsuck and everyone else does away with desktop apps and makes you subscribe to crap like Orifice (Office for the humor impaired) online.

        “Has anyone actually tried to make something basic even work right?” No, I’m sure that every web developer in the world made it their personal goal to write absolute crap and pass it off as gold. Not a single one of us wants to do anything quality in our work. But hey, being a developer yourself, you knew that. Dincha. Unfortunately I took away nothing of value from your post because the whole thing has the miasma of your negativity surrounding any statement that comes close to coherence.

        Good luck on your CMS search.

      • #3087904

        The sorry state of web development

        by zetacon4 ·

        In reply to The sorry state of web development

        I read this blog with interest, due in no small part to my long history of creating business applications hosted by the best browser technology of the time. It’s been a painful journey, but today, I can report a lot of good news to all the nay-sayers out there. I do not confuse a browser-hosted application with a standard public web page. The two have almost nothing in common.

        As a web developer and interface designer of many years, I can attest how difficult it can be doing a good job of this area of design. The latest capabilities of browsers like Firefox and Safari help the programmer design and implement some very pleasant interfaces. I am still amazed at the phobia against javascript for simple client-side interface enhancements. There seems to be a great deal of misinformation floating around about this tool. If you attempt to make your web page behave as nicely without javascript as with it, using only CSS properties and actions, you will end up standing on your head, and scratching it a lot too! And, still you won’t have the simple efficient interface you can implement with just a small amount of javascript coding.

        The one thing I would love to see is a browser that is completely as sophisticated as a desktop application. I think there is a need for loading of encrypted scripting from the server and running it as tokenized code within the browser. This will allow client scripting to run as fast as native coded desktop applications for the most part. The programmer could feel a bit more secure with his coding too.

        The second thing needed to allow us “programmers” to build mature, user-friendly interfaced applications within the browser is a truly universal and standardized DOM API. No exceptions, nothing left out. This API would be portable to any browser or other program needing a truly universal web-smart engine for networked data processing. Everybody’s browser would behave exactly the same because they used this engine, rather than re-inventing the interface all over again.

        And, finally, the one big complaint our blogger buddy mentioned was the look and feel of the graphic design and how easy and simple it should be to use an application or web page! (Remember, they ARE two distinct things.) The issue will never be what slick development environment you are using, or what flavor of script or other programming tool you happen to be using to create your masterpiece. The issue will remain how all these tools and methods and environments work together to render a pleasant and truly useful human work tool. It's a very complicated subject. We will probably never cease discussing, abusing, redesigning, and improving on it. It will remain one of developers' and programmers' favorite topics.

      • #3088511

        The sorry state of web development

        by jaqui ·

        In reply to The sorry state of web development

        A web page that includes client-side scripting is:
        1) theft of computer processing capacity.

        2) unauthorised access to electronic resources.

        If users had to specifically enable JavaScript, then they would be choosing to allow it.
        Because the default setting is to have it turned on, the end user isn't asked, and that does not mean implied consent.

        A web-based application is completely different, and would benefit from fancy bells and whistles.

        But if a web page requires bells and whistles, then it is extremely poorly designed, and not designed for what the web is for: access for all to information.

        I'm seriously considering developing a browser that will show exactly how many websites are both stealing from people and not coded for security:
        no plugins, no JavaScript, no ActiveX, no VBScript, and if the site isn't using SSL, it displays the page with a red wash over everything.
        It would be a great tool for developing secure web pages and apps.

      • #3088438

        The sorry state of web development

        by roho ·

        In reply to The sorry state of web development

        I have to say there is some truth somewhere in this article, but it is overshadowed by a lot of frustration.
        Over the last few years a lot has improved about web sites and web applications. This is still relatively fresh territory and progress is maybe slower than one would hope, but progress is there. I agree that there are still a great many sites that provide bad interfaces and show that they are based on hyped but not understood technology.

        On JavaScript: it is currently enjoying its second life. It came about in two different DOM flavors, was abused for a lot of things (nice clocks floating around your cursor!), and was considered dangerous for a long time. Then it was decided that you should use it as little as possible, better not at all.
        Then the "unobtrusive JavaScript" idea was born. Since then, JavaScript has slowly been coming back into the mainstream. With JavaScript you can get a better user experience. Client-side validation can help a lot when filling out a long form. But developers must always implement server-side validation as well.
        AJAX can also add to the user experience, but it is still very new and not yet of age. It is still a big hype. Reason enough to hold off for a while and let the community develop good practices for using it. There is much hype and experimentation around, and many of these experiments are fun and cool, but not really usable.

        I will just leave the other subjects like XML alone.

        My opinion is that the Internet and the Web are slowly becoming better and better. There are growing pains, but overall I think things are improving.

        The statement that

        web developers generally stink, not just as programmers, but as user interface engineers.

        is way, way out of proportion. My opinion is that slowly but surely things are improving, and there will always be some inconsistencies (browsers, DOM, CSS, etc.) that make building good stuff a little bit more of a challenge. And I do like a challenge, however frustrating these can be at times.

      • #3088328

        The sorry state of web development

        by wayne m. ·

        In reply to The sorry state of web development

        Similar Conclusions

        I largely agree with J. Ja and feel the current state of web technology is ill-suited for application support; despite the promises of easy distribution and maintenance for web-based software, most end users need to be dragged kicking and screaming from their client-server versions (and rightfully so). I would suggest the following changes are necessary before we can establish full-fledged web-based applications: return to a session-based model, define a rich set of user interface primitives, combine presentation and business logic, and adopt a client-multiserver model.

        Session-Based Model

        As J. Ja alluded to under his writing about HTTP, the sessionless model has failed and it is time to return to a session-based model along the lines of Telnet.  In a distributed application, the vast majority of the processing power and short-term storage exists at the client, yet the sessionless model tries to push processing back to the server and transfer information back and forth repeatedly.  The user thinks he has an application session and the developers play tricks to simulate a session, everyone gets the need for a session, except for the HTTP standard.  The result is increased software complexity, increased processing requirements for the centralized resources, and lack of a graceful degradation path (if you would like more on this last item, just ask!).

        Rich User Interface Primitives

        The reason that things like JavaScript, ActiveX, AJAX, and Flash came into being is that the HTML primitives are far too basic to provide a full-featured user interface.  Instead, developers are forced to write custom augmentation to perform common, repeated tasks.  Why can’t we have a text box that has a label, formatting controls, spell checking, grammar checking, table formatting, etc built in?  Why can’t we have a number box that has a label, only accepts numeric characters, and restricts entry to a specified bounds?  Why can’t we have a date box that has a label, that calls a calendar, and can bounds check a date against the current date?  Why can’t we have a list box that matches against partial typed data (who was the idiot who decided to cycle based on retyping the first character)?
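
        Today each of those has to be hand-rolled on top of a plain text input. As a rough illustration of that custom augmentation (the element id and the 1–100 bounds are made up), a "number box" ends up looking something like this:

```javascript
// What a "number box" primitive has to look like today: hand-rolled
// validation wired onto a plain text input. The "quantity" id and the
// 1..100 bounds are made-up examples.
function attachNumberBox(id, min, max) {
    var box = document.getElementById(id);
    box.onkeyup = function () {
        // strip anything that is not a digit
        box.value = box.value.replace(/[^0-9]/g, "");
    };
    box.onblur = function () {
        var n = parseInt(box.value, 10);
        if (isNaN(n) || n < min || n > max) {
            alert("Please enter a number between " + min + " and " + max + ".");
            box.value = "";
        }
    };
}

attachNumberBox("quantity", 1, 100);
```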

        Combine Presentation and Business Logic

        To provide a good user experience, applications need to provide timely feedback to the user. We cannot have a batch-mode concept of submitting a page to be processed; rather, we need to apply business rules and indicate the results to the user as the data is entered and becomes available. Although this seems to disagree with one of J. Ja's recommendations, I would argue that this is the root cause of several of his issues. Because of the attempt to separate presentation logic from business logic, presentation languages do not support the constructs to implement business logic, creating the need for language hybrids such as JavaScript and ASP script. Because of the use of different languages, we have created tiers of developers with different skills. Rather than logic being placed in the most optimal place in the architecture, it is placed based on the development language skills of the particular writer. No wonder the code is disorganized. It is this separation of presentation and business logic that prevents us from developing the rich user interfaces I described above.

        Client-Multiserver Model

        The next major advance in application development will be the service-based model. The idea of breaking a single screen into several areas that are independently updated needs to be extended to have several areas that are independently updated by different servers. The client needs to be the central point to consolidate and distribute information. To do this through a central server fails to take advantage of the distributed processing power of the client machines. This also leads to large-scale code reuse, allowing major functions to be implemented once and pulled together where needed. With a single-server model, common functions are typically rewritten for each application's server. This model also allows us to start to create multiple custom role-based applications instead of one-size-fits-all general application interfaces.

        Summary

        The web-based model was largely an attempt to provide a friendlier version of FTP; a way to provide a verbose directory listing and download files without manipulating directory structures.  It does not provide an adequate framework for interactive applications and much of the inherent capabilities of Telnet and 3270 interfaces have been lost.  It is time to create a technology that provides the benefits of a web-based application in a manner that is actually usable.

      • #3088283

        The sorry state of web development

        by staticonthewire ·

        In reply to The sorry state of web development

        I think there are two main causes for the problems you describe with web-centric software.

        The first is that web-centric apps have a much lower bar for programmer entry than standalone or LAN-based apps ever did, and as a consequence you get a crowd of n00bs building apps. And of course, they're making all the traditional and time-honored n00b errors. So on one hand, you've got amateurs with powerful tools. On the other hand, you have the complexity of the problem space. Face it – it's harder to build a web-centric app than any single-client or state-aware LAN app.

        Your specific focus is on CMS apps, and boy, that's a doozy. I've written more than one in-house CMS app myself, and this is a truly messy place to be. Just defining the problem space proved intractable – one character thought "content" was limited to the text and pretty pictures that would appear in a web page, the next included advertisements. One person included javascript snippets as data, someone else had a special category for code. One wanted cross-browser generation that would handle Netscape 2.0 and up, another was an IE fascist… it was neverending.

        Most CMS systems deal with this by NOT dealing with this. They hand out a system that can do it however the user wants, they add a few templates that can slice the system a few different ways, and leave it up to the client. Which is probably the right way to go right now, given the singular lack of maturity for this market. But of course it makes for a fairly intractable and very fragile end product…

        I get why you're p.o.'d, and I've been in the business long enough to have a vague grasp as to why these problems exist, but to be honest, I really don't see what you expect. You position the discussion on the bleeding edge of a nascent technology and then complain that everything is unstable and in flux. What do you expect, given the real estate you've staked out for yourself?

        Software grows through evolution and versioning, and in every fresh arena to which software solutions are applied, you find that there is at first a short period dominated by a single form, followed by a "wild west" period, during which an insane proliferation of (often laughably unfit) forms takes place. That's where cutting edge web-centric apps are, right now. It'll all settle down in a bit, and the cutting edge of software will perform the usual random walk to some currently unexplored area of human endeavor; at that point, people will start discussions on the topic of how stodgy and unexciting web development has become…

        I thought Veen's discussion was thought provoking in some ways, but also to some degree he was failing to take the landscape into consideration. But of course he admitted that he was being deliberately provocative and inflammatory, as he wished to spark a discussion. The most interesting thing I found on his page was the series of links to various CMS systems… none of which satisfy me.

        I am currently tasked with developing yet another in-house CMS system; I can build or buy, it's up to me. I have to be able to manage 16 different text document types, six audio types, four video types, and the usual slew of image types, with data residing directly in MySQL and remotely across a corporate network. I have six levels of user access, an ad hoc "project" editor that lets users create any sort of assembly they want of the data they have available in the system, object versioning, and – get this – they want boolean ops! I have to be able to apply fuzzy set logic to multiple projects and come up with a coherent content generation schema. Wish me luck…

      • #3087817

        The sorry state of web development

        by tony hopkinson ·

        In reply to The sorry state of web development

        A nice rant.

        First of all the WEB was not designed for applications and in fact still isn’t.

        The standards vacuum allowed a whole pack of vested interests to leverage their market goals into the technology.

        I completely agree about the development/debugging difficulties, but that's really down to the lack of standards.

        Client-side scripting, as implemented, is a security nightmare. I don't care how much it improves the experience. I'm completely uninterested in having my resources controlled by any 3rd party. They're MINE.

        I completely disagree about XML. OK, some sort of compression would be nice, but a standardised interface between applications is what we are screaming for, so whatever mechanism is used must carry metadata or it's a complete waste of time.

        HTTP is stateless, and should remain so. All the difficulties we are having are a result of trying to build solutions that require stateful communications in such an environment. So why did we get here? Someone saw an opportunity not to have to micro-design application-specific front ends! Yet another attempt at the non-technical type's holy grail: all-things-to-all-men, super duper, don't-need-any-skills, cheap-arse software development.

        It went wrong again, didn't it: instead of needing no skills, you need ten or more.

        Instead of doing all things well, it did no things well.

        Instead of meeting all men's needs, it met no man's needs.

        Is this situation new? Well, no.

        Sorting it out is very simple: use the right tools for the job, or design an environment where the right tools for the job can be created.

        You can hammer in a nail with a wrench, but you're likely to smash your thumb, bend the nail, break the wrench, and end up with two pieces of timber that still aren't joined together. And that's when you're careful.

        Regards Tony

         

         

      • #3087792

        The sorry state of web development

        by billt174 ·

        In reply to The sorry state of web development

        Let's see: we want one application that will run correctly and look the same across how many systems and in how many browsers? I'm sorry, but what is the complaint? We're lucky there is even half the compatibility that there is. With the competing businesses and the level of cooperation needed to make this happen, we are lucky to have what we have.

        With desktop apps there is usually one technology and one small set of designers and users.

        Web developers? Some folks seem to think that they are the grand designers of everything that goes on a site. What planet do they live on? On my planet I have managers, clients, and graphics people who all know more than me and make sure I know this. After I redo the design of a site for the fourth or fifth time, it's a little difficult to find time to make the code work correctly. We have groups having meetings about what new technology will make us more productive. No one asks the question: is technology the problem, or is too much of it the real problem? I've been programming for over 12 years, and the technology and expectations have come at a faster pace each year; it just doesn't make a lot of sense to spend time and money on perfecting what's here and now since it will probably be gone in a year or two. It's business, just business.

      • #3087778

        The sorry state of web development

        by jlatouf ·

        In reply to The sorry state of web development

        There is a creative solution to every difficulty. As has been mentioned in some of these comments, the Internet itself has evolved in ways never envisioned by its original architects. It shouldn’t be surprising that creative individuals have overcome the obstacles involving “state preservation” and efficiency (“full page refreshes” and interactive content).

        Yes, AJAX does require a commitment to excellence and the willingness to persevere with a new paradigm in an industry that is lucrative enough using the "old methodologies".

        Isn't this exactly the type of recipe for market capitalization and success that levels the playing field and allows a "diamond in the rough" to join the big kids on the block?

        This manure pile of "pragmatic difficulties" allows the few and the brave to become pathfinders who will forever change the efficiency of the Net.

        The term AJAX may be a little over a year old; however, this author has been committed to, and excelling at, this paradigm for more than 5 years. I have developed and refined a technology base which I have called "Portable Interface" (PI) technology, which employs many of the same methodologies as AJAX.

        As cited in this blog, there have been numerous obstacles; however, the end result is well worth the effort and perseverance.

        In addition to the traditional websites built for clients, PI has empowered "Global Online Graphical User Interfaces" which are user-friendly, information-friendly, and network-friendly.

        From my perspective web development is only in a sorry state if creative individuals become content with the past and hesitate to move into the future.

      • #3087763

        The sorry state of web development

        by codebubba ·

        In reply to The sorry state of web development

        J.Ja,

        You do make some good points here. Web development is a mess in a lot of ways.  I still, myself, prefer doing client/server development.  The “web” as it is presently designed (requiring a browser environment for presentation) is, IMHO, a kludge.  I personally think that it would have made more sense to use the Internet as the underlying data transport layer but to implement the presentation outside the confines of a “browser”.  Still … that’s the model that we all, apparently, decided upon so at the moment I guess we need to make the best of it.

        The .Net model I’m seeing using WinForms while still providing linkage through the Web seems like it might be a good intermediate step in the right direction.  However, this particular technology is not yet mature … it will be interesting to see what the whole thing looks like in another 10 or 20 years (by the time I retire).

        And you’re right … what’s up with this blog page and the refresh of everything every time you type a character?  Yuck!

        -CB

         

    • #3088611

      What To Do About The Sorry State Of Web Development

      by justin james ·

      In reply to Critical Thinking

      What To Do About The Sorry State Of Web Development

      A commenter on my previous article (The Sorry State Of Web Development) made a good point: I put out a lot of negativity without offering anything constructive in return. Well, I'm going to rectify that mistake.

      Here is what I think needs to be done to improve the Web, as far as programming goes. I admit, much of it is rather unrealistic considering how much inertia the current way of doing things already has. But just as Microsoft (eventually) threw off the anchor of the 640 KB barrier for legacy code, we need to throw off the albatrosses around the neck of Web development.

      HTTP

      HTTP is fine, but there needs to be a helper (or replacement) protocol. When HTTP was designed, the idea that anything but a connectionless, stateless protocol would be needed was simply not considered. Too many people are layering stateful systems that need to maintain concurrency or two-way conversations on top of HTTP. This is madness. These applications (particularly AJAX applications) would be much better served by something along the lines of telnet, which is designed to maintain a single, authenticated connection over the course of a two-way conversation.

      HTML

      HTML is a decent standard, but unfortunately, its implementation is rarely standard. Yeah, I know Firefox is great at it, but its penetration still "isn't there" yet. More importantly, while being extremely standards-compliant, it is still just as tolerant of non-standard code as Internet Explorer is. If Internet Explorer and Firefox started simply rejecting non-standard HTML, there is no way that a web developer could put out this junk code, because their customer or boss would not even be able to look at it. Why am I so big on HTML compliance? Because the less compliant HTML code is, the more difficult it is to write systems that consume it. Innovation is difficult when, instead of being able to rely upon a standard, you need to take into account a thousand potential permutations of that standard. This is my major beef with RSS; it allows all sorts of shenanigans on the content producer's end of things, to make it "easy" for the code writers, which makes it extraordinarily difficult to consume in a reliable way.

      When developers are allowed to write code that adheres to no standard, or a very loose one, the content loses all meaning. An RSS feed (or HTML feed) that is poorly formed has no context, and therefore no meaning. All the client software can do is parse it like HTML and hope for the best.

      JavaScript

      This dog has got to go. ActiveX components and Java applets were a good idea, but they were predicated on clunky browser plug-ins, slow virtual machines, and technological issues which made them (ActiveX, at least) inherently insecure. The problems with JavaScript are many, ranging from the interpreters themselves (often incompatible interpretation, poorly optimized, slow) to the language itself (poorly typed, pseudo-object oriented, lack of standard libraries) to the tools used to create it (poor debugging, primarily). JavaScript needs to be replaced by a better language; since the list of quality interpreted languages is pretty slim, I will be forced to recommend Perl, if for nothing else than its maturity on both the interpreter end of things and the tools aspect. Sadly, Perl code can quickly devolve into nightmare code, thanks to those implicit variables. They make code writing a snap, but debugging is a headache at best, when $_ and @_ mean something different on each and every line of code, based on what the previous line was. Still, properly written Perl code is no harder to read and fix than JavaScript, and Perl already has a fantastic code base out there.

      Additionally, the replacement for JavaScript needs to be properly event-driven if it is ever to work well in a web page. Having a zillion HTML tags running around with "onMouseOver()" baked into the tag itself is much more difficult to fix (as well as completely smashing the separation of logic and presentation, which I hold to be the best way of writing code) than having TagId_onMouseOver() in the script block (or better yet, in an external file; JavaScript is capable of doing this, but it is rarely used).

      The client-side scripting also needs the ability to open a direct data connection to the server. Why does an AJAX application need to format a request in HTTP POST format, send it to an application server which does a ton of work to interpret the request, pass it to an interpreter or compiled code, which then opens a database connection, transforms the results into XML, and then passes it back over the sloppy HTTP protocol? Wouldn't it be infinitely better for the client to simply get a direct read-only connection to the database via ODBC, named pipes, TCP/IP, or something similar? If we're going to use the web as a form of distributed processing, with the code managed centrally on the server, this makes a lot more sense than the way we're doing things now.

      XML

      XML needs to be dropped, except in appropriate situations (where two systems from different sources that were not designed to work together need to interoperate, or for tree-structured data, for example). Build into our client-side scripting native methods for data transfer that make use of compression and of delimited and fixed-width formats for "rectangular" data sets (XML is good for tree structures, and wasteful for rectangular data), preferably with the format negotiated automatically between the client and the server, and we're talking massive increases in client-side speed and server-side scalability. This would add only a few hours of development time on the server side, and would pay dividends for everyone involved.
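
      As a rough, made-up illustration of the overhead, this Perl sketch serializes the same three rectangular rows as XML and as tab-delimited text, then compares the byte counts; the field names and data are invented.

      #!/usr/bin/perl
      use strict;
      use warnings;

      my @rows = (
          [ 1, 'Smith',  'NY' ],
          [ 2, 'Jones',  'CA' ],
          [ 3, 'Garcia', 'TX' ],
      );

      # XML: every row repeats every tag name.
      my $xml = "<rows>\n";
      for my $r (@rows) {
          $xml .= "  <row><id>$r->[0]</id><name>$r->[1]</name><state>$r->[2]</state></row>\n";
      }
      $xml .= "</rows>\n";

      # Tab-delimited: one header line, then just the data.
      my $tsv = "id\tname\tstate\n";
      $tsv .= join("\t", @$_) . "\n" for @rows;

      printf "XML:           %d bytes\n", length $xml;
      printf "Tab-delimited: %d bytes\n", length $tsv;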

      Application Servers

      The current crop of application servers stink, plain and simple. CGI/Perl is downright painful to program in. Any of the "pre-processing" languages like ASP/ASP.Net, JSP, PHP, etc. mix code and presentation in difficult-to-write and difficult-to-debug ways. Java and .Net (as well as Perl, and the Perl-esque PHP) are perfectly acceptable languages on the back end, but the way they incorporate themselves into the client-to-server-to-client round trip is currently unacceptable. There is way too much overhead. Event-driven programming is nearly impossible. Ideally, software could be written with as much of the processing as possible done on the client, with the server accessed only for data retrieval and updates.

      The application server would also be able to record extremely granular information about the user's session, for usability purposes (what path did the user follow through the site? Are users using the drop-down menu or the static links to navigate? Are users doing a lot of paging through long data sets? And so on). Furthermore, the application server needs to have SNMP communications built right into it. You can throw off all the errors you want to a log, but it would be a lot better if, when a particular function kept failing, someone were notified immediately. Any exception that occurs more than, say, 10% of the time needs to be immediately flagged, and maybe even cause an automatic rollback (see below) to a previous version so that the users can keep working while the development team fixes the problem.
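
      As a toy sketch of that 10% rule (not a feature of any existing application server), the check might look something like this in Perl; record_call() and notify_operations() are placeholder names for hooks the server would provide.

      #!/usr/bin/perl
      use strict;
      use warnings;

      my %calls;      # total invocations per function
      my %failures;   # failed invocations per function

      # In a real application server, this would be fed by the request pipeline.
      sub record_call {
          my ($function, $failed) = @_;
          $calls{$function}++;
          $failures{$function}++ if $failed;

          my $rate = $failures{$function} / $calls{$function};
          if ($calls{$function} >= 20 && $rate > 0.10) {
              notify_operations($function, $rate);   # e.g. fire an SNMP trap or page someone
          }
      }

      sub notify_operations {
          my ($function, $rate) = @_;
          printf STDERR "ALERT: %s is failing %.0f%% of the time\n", $function, $rate * 100;
      }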

      Presentation Layer

      The presentation layer needs to be much more flexible. AJAX is headed in the right direction with the idea of only updating a small portion of the page with each user input. Let's have HTML where the page itself gets downloaded once, with all of the attendant overall layout, images, etc., and have only the critical areas update when needed. ASP.Net 2.0 implements this completely server-side with the "Master Page" system; unfortunately, it's only a server-side hack (and miserable to work with as well, as the "Master Page" is unable to communicate with the internal controls without doing a .FindControl). Updates to the page still cause postbacks. I would like to see the presentation layer have much of the smart parts of AJAX built in; this is predicated on JavaScript interpreters (or better yet, their replacements) getting significantly faster and better at processing the page model. Try iterating through a few thousand HTML elements in JavaScript, and you'll see what I mean.

      The presentation layer needs to do a lot of what Flash does, and make it native. Vector graphics processing, for example. It also needs a sandboxed, local storage mechanism where data can be cached (for example, the values of drop down boxes, or "quick saves" of works in progress). This sandbox has to be understood by the OS to never have anything executable or trusted within it, for security, and only the web browser (and a few select system utilities) should be allowed to read/write to it.

      Tableless CSS design (or something similar) needs to become the norm. This way, client-side code can determine which layout system to use based upon the intended display system (standard computer, mobile device, printer, file, etc.). In other words, the client should be getting two different items: the content itself, and a template or guide for displaying it based upon how it is intended to be used. Heck, this could wipe out RSS as a separate standard, just have the consuming software display it however it feels like, based upon the application?s needs. This will also greatly assist search engines in being able to accurately understand your website. The difference (to a search engine) between static and dynamic content needs to be eradicated.

      URLs need to be cleaned up so that bookmarks and search results return the same thing to everyone. It is way too frustrating to get a link from someone that gives you a "session timeout" error or a "you need to login first" message, and it significantly impacts the website's usability. I actually like the way Ruby on Rails handles this end of things. It works well, from what I can see.

      Development Tools

      The development tools need to work better with the application servers and design tools. Graphic designers need to see how feasible their vision will be to implement in code. They would also be able to see how their ideas and designs affect the way the site behaves; if they can see, up front, that the banner they want at the top may look great on their monitor but not on a wider or narrower display, things will get better. All too often, I see a design that simply does not work well at a resolution different from the one it was aimed at (particularly when you see a fixed-width page that wastes half the screen when your resolution is higher than 800x600).

      Hopefully, these tools will also be able to make design recommendations based upon usability engineering. It would be even sweeter if you could pick a "school" of design thought (for example, the "Jakob Nielsen engine" would always get on your case for small fonts or grey-on-black text).

      These design tools would be completely integrated with the development process, so as the designer updates the layout, the coder sees the updates. Right now, the way things are being done, with a graphic designer doing things in Illustrator or Photoshop, slicing it up, and passing it to a developer who attempts to transform it into HTML that resembles what the designer did, is just ridiculous. The tools need to come together and be at one with each other. Even the current "integrated tools" like Dreamweaver are total junk. It is sad that after ten years of "progress", most web development is still being done in Notepad, vi, emacs, and so forth. That is a gross indictment of the quality of the tools out there.

      Publishing

      The development tools need a better connection to the application server. FTP, NFS, SMB, etc. just do not cut it. The application server needs things like version control baked in. Currently, when a system that seems to work well in the test lab develops problems once pushed to production, rolling back is a nightmare. It does not have to be this way. Windows lets me roll back with a System Restore, or uninstall a hot-fix/patch. The Web deployment process needs to work the same way. It can even use FTP or whatever as the way you connect to it, if the server invisibly re-interprets that upload and puts it into the system. Heck, it can display "files" (actually the output of the dynamic system) and let you upload and download them, all invisibly, the same way a document management system does. This system would, of course, automatically add the updated content to the search index, site map, etc. In an ideal world, the publishing system could examine existing code and recode it to the new system. For example, it would see that 90% of the HTML code is the same for every static page (the layout), with only the text in a certain part changing, and take those text portions, put them in the database as content, and strip away the layout. That would rock my world.

      Conclusion

      What does all of this add up to? It adds up to a complete revolution on the Web in terms of how we do things. It takes the best ideas from AJAX, Ruby on Rails, the .Net Framework, content management systems, WebDAV, version control, document management, groupware, and IDEs and adds them all into one glorious package. A lot of the groundwork is almost there, and can be laid on top of the existing technology, albeit in a hackish and kludged way. There is no reason, for example, why SNMP monitoring could not be built into the application server, or version control, or document management. The system that I describe would almost entirely eliminate CMSs as a piece of add-on functionality. The design/develop/test/deploy/evaluate cycle would be slashed by a significant amount of time. And the users would suffer much less punishment.

      So why can't we do this, aside from entrenched ideas and existing investment in existing systems? I have no idea.

      J.Ja

      • #3089538

        What To Do About The Sorry State Of Web Development

        by dfirefire ·

        In reply to What To Do About The Sorry State Of Web Development

        It is easy to look at what the Web has grown into now, throw the mess away and start building it from the ground up again, in a clean way. But what is clean? Not everyone shares the same idea.
        On top of that is the fact that the Internet is a growing and evolving thing, where many smart guys look for ways to get their ideas working within the possibilities that are offered by (also evolving) browser and server developers. Who has the right to abolish a technology that offers many developers a means to earn their living? Let's say, a JavaScript developer? It would be like changing the real world by stating: OK now, bicycles are too slow and not capable of long distances, let's replace them... what consequences would that have?
        The Net is very much like the real world. It's a market place, where the smartest or richest vendor gets to lead the way, but sometimes a very good idea can make it.
        It would already be a lot easier should all browser developers stick to the W3C guidelines, but I can assure you: even if this "brand new perfect way of internet" should become reality, you will see that someone like Micro$oft will have a different implementation. And off we go again!
        I strongly disagree that XML should go. Instead, everything should evolve (or has already) in this direction. Developers should only use XHTML from now on, and evolve to pure XML/XSLT. That way many sorts of data can be sent, and the DTD specs can tell how to process it. As for the server side technologies: there's no unified taste, so let's leave it up to the developers themselves. I am a strong Java advocate, for instance; why should I drop JSP for the .NET approach?
        And the bottom line: who will develop this new internet, get astronomically rich with it, and in the end steer the whole world down its path? Looks like I am getting a Déjà Vu!

      • #3089519

        What To Do About The Sorry State Of Web Development

        by andy ·

        In reply to What To Do About The Sorry State Of Web Development

        Mmmm – client scripting in perl. 

        If someone asked me what would be the one best way to ensure that most web pages were a complete disaster, I would probably say “have people do web scripting in perl”.

        I agree that well written perl is perfectly acceptable, but in my experience that constitutes about 1% of what I’ve seen.

        Given that the majority of web script is written by people who aspire to be Visual Basic programmers "when they learn a bit more about computing", perl would be a total and utter disaster.

        Why not leave javascript as is for the masses, but add a standardised method of using today's best languages (i.e. Java, and C# if it's capable of being integrated into Firefox et al) in the client. 

        Why should I not be able to write client code in Java to make XML requests and process DOM objects in the browser?  Seems simple to me.

        Andy

      • #3087811

        What To Do About The Sorry State Of Web Development

        by kovachevg ·

        In reply to What To Do About The Sorry State Of Web Development

        My comments are grouped by section. I will use short quotes from the original for better reference.

        JavaScript

        "This dog needs to go" – No, this dog needs to be standardized, and Microsoft should abandon JScript. JavaScript is well established in the existing browsers, and replacing it will force a re-write of so many web pages. Do you really contemplate so much effort and cost?!!! Any CFO will stop you immediately. The CEOs may fire you immediately. Sorry, business logic. Purely technical people don't run companies, and they shouldn't be allowed to, because they are only fascinated by how cool something is, and rarely, if ever, pay attention to how much it is going to cost and how long it would take to transform an existing business model.

        I completely agree with you about the onMouseOver() – it should be where it belongs, in the script section.

        Perl is not an OO language. JavaScript is. So the purpose of the two languages is very different. There is object oriented Perl but I am not so familiar with it and thus cannot give an authoritative opinion on this one.

        "Wouldn't it be infinitely better for the client to get a direct read only connection to the DB via ODBC, named pipes, TCP/IP, or something similar?" – you are talking like a pure technologist. Remember there is something called an application framework. Multi-tier application architectures are designed for scalability. You cannot effectively scale with your approach – it presents a paradigm that makes things hard to manage when there are thousands of connections. On the back end you have a cluster with a hundred servers, and you have things like load balancing and transparent failover. I hope you see the shortcomings of your suggestion in this context. It would sacrifice scalability and redundancy – both necessary for e-commerce. The users want 24/7/365(6) availability. Sorry, business rules make life complicated.

         

        XML

        "XML needs to be dropped, except in appropriate situations". It is precisely the appropriate situations that drive its existence. For example, a pharmacist can describe a prescription in XML – a prescription has a drug name, an authorizing signature, a date and so on. A pharmacist does not understand HTML, and a custom description represents an appropriate situation from his standpoint. By now you probably see what I am getting at – appropriateness is highly subjective. XML allows you to define "unexpected" layouts for documents. Then they can be translated to HTML through CSS and other formatting, but also PDF, TIFF, etc.

         

        Application Servers

        "Event driven programming is nearly impossible. Ideally, software can be written with as much of the processing done on the client, with the server only being accessed for data retrieval". Now you are talking FAT CLIENT. A Web browser is normally a thin client. Processing is left to the server – a powerful machine that can handle complex tasks. Have a look at the disadvantages of fat clients; Wikipedia should do it. In general they are good as programming tools, like the .NET tools that allow you to drop objects (VB programming as well). But for simple web presentation they are inadequate.

        "Any exception that occurs more than, say, 10% of the time needs to be immediately flagged, and maybe even cause an automatic rollback". No, no, no, you cannot do that on a production system, especially in e-commerce applications. You potentially take out competitive advantages of the business model. If two companies offer you similar products, competition is based on the value provided by additional features. Say you buy books from Amazon or Barnes&Noble. Prices are the same. If one of the sites introduces a feature that allows you to shave 5 minutes off the average purchase time, where would you go? ... I thought so. Now imagine you roll this feature, and therefore this advantage, back – you will stampede the users, because they will think the feature is still available and will have to relearn the old ways. And for users who never used the old ways it will be a pain, and your company will get a thumbs-down because the quality of the user experience dropped – see CRM for more details. But again, you think like a technologist, not a business analyst. The majority of users are not technically savvy. I used to think like you. Then I took a Human Factors course, and my views on what technology should be will never "roll back".

         

        Presentation Layer

        Updating a small portion of the page is a great idea – it reduces net traffic and speeds up response time as perceived by the users!!! Thumbs up.

        "Try iterating through a few thousand HTML elements in JavaScript". You are not supposed to have thousands of HTML objects residing on a single HTML page. If you do, then you did not design it right, and you overwhelmed the users with content that should have been spread across several pages. It goes back to design – more is not more, just enough is more. I can't remember the name of the famous designer who said that, but he is a renowned expert. Such massive programming should be done on the server side, where you have the processing power and the complexity control to do it effectively ... with Java or other objects, NOT HTML objects.

        "It also needs a sandboxed, local storage mechanism where data can be cached ... This sandbox has to be understood by the OS to never have anything executable or trusted within it ...". Here you imply browser-OS integration. A browser is just another application, and even if it were integrated with the OS to that extent, I could install my own, recompiled version of the browser and do Black Magic. So, a noble idea, but with formidable practical implications. How many browser vendors can you persuade to do that? Open source will never play well with Microsoft. Again, Human Factors are at play.

         

        Development Tools

        "The graphic designers need to see how possible their vision will be to implement in code". That would be wonderful, but they cannot do it. Graphic designers are people of art – they see things in colors, lines, shapes, paint, etc. They don't understand HTML, JavaScript and so on, or put things in variables, and even if they did, they would be limited to a browser, and cross-compatibility would kill the whole thing. Sorry, we gotta bite that bullet as technologists.

        Later on you talk about tool integration – Photoshop and programming tools. Integration of disparate programs is the hardest of all. It comes back to cost.

         

        Publishing

        "The application server needs things like version control baked in". Any skilled system admin would tell you that running development tools on a production system is not a good idea – complexity is one reason, security is another, but most of all, only the necessary applications should run in a production environment. This reduces the probability of a process going haywire. You can run an application server with version control built in in a testing environment – I couldn't agree more. But that's not what you had in mind, right? And you have application integration again – the toughest of all.

        "In an ideal world, the publishing system could examine existing code and recode it to the new system". That is the dream of programmers: MACHINES THAT CAN THINK!!! Real AIs are decades away, my friend. Current technology does not allow machines to evolve naturally. The hardware is not compatible with the goal. You should probably consider some paradigms in biology, but with non-living materials the effect you seek will be hard to achieve.

         

        Conclusion

        "It takes the best ideas from AJAX, Ruby on Rails, the .NET Framework, content management systems, WebDAV, version control, document management ... and adds them into one glorious package. A lot of the groundwork is almost there, and can be laid on top of the existing technology, albeit in a hackish and kludged way". So it is going to be a glorious package, in which things will be laid on top of the existing technology in a hackish and kludged way. What is so glorious about the package then??? I've never seen a better oxymoron. The undertaking you are talking about is massive and will take years to develop. Some of the technologies are proprietary, and you will not get the vendors' permission to use them and mix them as you see fit just because it will be easier for developers afterward. For example, Microsoft will never allow you to mingle .Net with J2EE. The players on the market protect their interests. It will take you years to negotiate the basic agreements – Human Factors, sorry.

        A friendly piece of advice: please take a good Human Factors course. If you embrace the basic concepts there, you will definitely see the world of programming with new and better eyes.

        I enjoyed writing these comments, and I thank you for your opinion.

      • #3087750

        What To Do About The Sorry State Of Web Development

        by davidbmoses ·

        In reply to What To Do About The Sorry State Of Web Development

        This is the second time in two days I have been directed to Jakob Nielsen’s site http://www.useit.com/

        Am I the only one that finds his site on usability extremely hard to use? My eyes flicked back and forth for a long time when I first opened that page. I have a hard time focusing on the content with colored backgrounds. I find the page offers no proper focus for leading your eye through the content. I'm sure after you use that site a few times, you get used to it.

      • #3087569

        What To Do About The Sorry State Of Web Development

        by bluemoonsailor ·

        In reply to What To Do About The Sorry State Of Web Development

        Critical:

        1. Inclined to judge severely and find fault.
        2. Characterized by careful, exact evaluation and judgment: a critical reading.

        You got definition number 1 down pretty well. Definition number 2, however, seems to elude you. "So why can't we do this, aside from entrenched ideas and existing investment in existing systems? I have no idea." Hmmmm... How about "Why can't I climb Everest in one step, aside from it's so tall? I have no idea".

      • #3090537

        What To Do About The Sorry State Of Web Development

        by joeaaa22 ·

        In reply to What To Do About The Sorry State Of Web Development

        to: davidbmoses

        i agree with you about his site being less than user friendly when it
        comes to reading the content of Jakob Nielsen’s site.  for me it’s
        not so much the colored backgrounds as the fact that everything is
        thrown together in two big lumps of words.  there’s nothing to
        differentiate anything in the two columns.  at least he could have
        done it as an outline format or something.  even an unordered
        bullet list would help.

        and sorry about the lack of caps.  my keyboard seems to be unhappy with me at the moment.

      • #3090231

        What To Do About The Sorry State Of Web Development

        by Anonymous ·

        In reply to What To Do About The Sorry State Of Web Development

        J.Ja, I have three letters for you: lol
        The comments others submitted made several good points which I won’t restate.
        Sorry, but I could not let this page go without laughing at it… at least you were not completely wrong, just in most cases.
        No hard feelings.

      • #3084852

        What To Do About The Sorry State Of Web Development

        by kreynol3 ·

        In reply to What To Do About The Sorry State Of Web Development

        How can you say “…as well as completely smashing the separation of logic and presentation which I hold to be the best way of writing code” in one paragraph and in the very next paragraph say “The client-side scripting also needs the ability to open a direct data connection to the server.” ??  Do you know anything about database connections?  There is a cost for each usually.  You have a limited number of these.  You seriously want every browser that can be opened to have a connection to the database?  LOL!  Good luck with that.  Ever heard of a connection pool?  Look it up, you’ll do your employer a huge favor and maybe not look like a fool in your next design meeting.

        I was reading your post to this point and then realized I wish I had never started reading it.  I hope I can purge what I read from my mind or at least apply some kind of mental flag to it so I do not accidentally use it without remembering where it came from.

      • #3084789

        What To Do About The Sorry State Of Web Development

        by mgtucker ·

        In reply to What To Do About The Sorry State Of Web Development

        I “hear” this conversation like people standing around the general store discussing the best way to shoe a horse.  Shoeing horses became irrelevant when motorized coaches were mass-produced. 

        Whoever comes up with a "killer" application that millions of users want, those users won't care if the application is written in Fortran.  People did not buy the first PCs or Apple computers because they had an Intel or Motorola chip.  They bought their first computer to run a spreadsheet or type a letter ("killer" apps).

        The first 20 years of any new idea or technology is called the “golden years” of that new paradigm.  Automobiles, telephones, aircraft, radio, television.  Do the math for the personal computer and you see the “golden years” have passed.  The “Year-of-the-LAN” was “foretold” again and again until the mid-90’s and the [effective] “public birth” of the internet.  (Yes, I know the history of the internet.  I’m talking when the public “woke up” to the internet.  That is also when the public woke up to Email.)

        That means to me that web stuff will only carry its novelty another 10 years (give or take a few months).  Then, even this “web development” discussion will be (for all intents and purposes) moot.

      • #3086150

        What To Do About The Sorry State Of Web Development

        by deedeedubya ·

        In reply to What To Do About The Sorry State Of Web Development

        I’m sorry to hear that you don’t know quite what you’re talking about.

        For example, “Having a zillion HTML tags running around with
        “onMouseOver()” baked into the tag itself is much more difficult to fix
        (as well as completely smashing the separation of logic and
        presentation which I hold to be the best way of writing code) than
        having TagId_onMouseOver() in the script block (or better yet,
        in an external file; JavaScript is capable of doing this, but it is
        rarely used).”
        Yes, it is rarely used, but that’s not JavaScript’s fault, is it? That would be the idiot implementers of JavaScript. Don’t blame JavaScript for it. Really, JavaScript is actually a wonderful language and has many useful and properly implemented features. I don’t claim it to be perfect, of course. People see JavaScript just like they do Flash. The idiots that implement both are typically idiots with no sense of style or structure. They make flashing doo-dads and nonsense instead of making proper code and systems. This is the fault of the idiot, not the languages.

        Your negativity isn’t much less than before. Why must you feel that everything needs to be tossed by the wayside? Do you believe we should just have a nice, big flood and start over? I’ve been told we’ve tried that one before. I’m sure someone has Noah’s email address if you want it.

        If you think you’re so smart, why don’t you write up some proper recommendations for us to implement. When we see your wonderful work, we will bow down to your superiority. Until then, how about you say something that might be useful.

        (By the way, this silly comment posting thing is difficult to use in Firefox)

      • #3085370

        What To Do About The Sorry State Of Web Development

        by kreynol3 ·

        In reply to What To Do About The Sorry State Of Web Development

        A 20 year lifespan huh?  The Internet is gonna be dead in 10 years huh?  Wow, how do you get from point A to point B?  I still use a car on a road.  The car "idea" has been around for what, maybe 100 years?  What about the road, because that is a better analogy for the Internet (infrastructure)...  let's see... Roman empire...  I really don't know the age of the idea of roads... I'd guess 2,600 years – maybe a lot more, and maybe even some other, even more ancient civilization...  If anyone is a betting man, I'll take a bet with you that the Internet will be here for at least as long as television has been (which is also older than 20 years – and the Internet will be the end of television, by the way).  Just a guess, but the Internet will outlive me, and hopefully I have at least 50 years left.

      • #3085362

        What To Do About The Sorry State Of Web Development

        by duckboxxer ·

        In reply to What To Do About The Sorry State Of Web Development

        No offense, but do you like anything?  Unfortunately you sound like a very unhappy person in need of a new career.

      • #3085145

        What To Do About The Sorry State Of Web Development

        by dna708133 ·

        In reply to What To Do About The Sorry State Of Web Development

        Hi all,

         

        All I can say is JavaScript and the HTML interface never realised their full potential...

         

        http://www.online-media-services.com

         

        dna

    • #3089390

      Bad Excel! Bad! BAD!

      by justin james ·

      In reply to Critical Thinking

      So here I am, checking out the Excel spreadsheet that my customer is using, to compare it to what I am currently working on. To see if they made any changes to the ODBC queries, I popped it open in a hex editor. Guess what? Not only were the queries there (as I expected), the connection string, INCLUDING THE PASSWORD, was there in plain text! Unbloody real.

      J.Ja

      • #3089060

        Bad Excel! Bad! BAD!

        by somebozo ·

        In reply to Bad Excel! Bad! BAD!

        well didnt they teach u excel was only good for school assigments..not enterprise apps..

    • #3085647

      Hey guys, the floppy drive is dead

      by justin james ·

      In reply to Critical Thinking

      [addendum 3/5/2006] Just hand me the "Putz of the Year" award. There was a patch out there for Partition Magic which removed the error I kept getting; it was blaming its inability to copy on a bad file system, when in reality it itself was the problem. Also hand out a "Putz of the Year" award (dated 2001) to the person(s) at PowerQuest who allowed that bug to escape their QA process, and who took quite a number of months (a year?) to fix the problem in their software.

      I’ve been struggling with a bad hard drive for a few days now. Five, to be more exact. Not the “struggling” where I don’t know what to do or how to replace it, but struggling with various utilities to get it to copy the data over to the new drive that came on Wednesday (today is Sunday). So far, I’ve tried the Windows XP Install CD (XCopy so graciously will not copy most of the files, even in Safe Mode), Partition Magic (keeps finding errors with the drive that CHKDSK isn’t finding/fixing), Maxtor MaxBlast, and Western Digital’s Data Lifeguard. What does this have to do with floppies?

      All of these tools, except for the Windows Install CD and Maxtor MaxBlast, let me create a "Rescue Diskette" of some sort, but don't have the option to burn this diskette to CD! Computers have been coming without floppy drives standard for quite a number of years, but the utilities haven't caught up. If it were a matter of simply hooking up a floppy drive, there would not be a problem; I have a spare floppy drive on the shelf just for this type of situation. But the sad fact is, the motherboard on the current PC simply does not have a connector for a floppy cable! I bet 95% (or more) of people who purchased a computer from an OEM (HP/Compaq, IBM, Dell, Gateway/eMachines) during the last three or more years are in the same boat that I am.

      Without being able to have something that works with its own boot loader, this operation just does not seem to be working. For whatever reason, upon booting with the new drive, certain things are not working right; my Windows toolbar is lacking the address bar, and more important, Office 2003 just isn’t working right. It requests a reinstall, but the installer/uninstaller says that it is an invalid patch file. I tried the fix from Microsoft, and it worked, but with those two really obvious errors in the disk copy, I really did not feel like taking my chances and sending the original drive back.

      It is just a bit frustrating that system utility makers (especially Microsoft, who helped push boot-from-CD for Windows installations, as well as pushed OEMs to ditch floppies) don't take into consideration how many people simply lack a floppy drive. I'm sitting here, knowing that all I need to do is a standard Windows backup, create an Emergency Repair Disk, then run the Automated System Recovery, and I can't.

      Have I ever mentioned how amazed I get by the lack of consideration on the part of developers for the users?

      J.Ja

      • #3085639

        Hey guys, the floppy drive is dead

        by sbrooks ·

        In reply to Hey guys, the floppy drive is dead

        Had the exact same problem a while back; then I tried an external USB floppy disk drive, booted Ghost fine and copied across to the new drive with no problems. It's likely that any new mobo without FDD connectors will support boot from USB, and so should boot from the external USB drive. It is also quite easy to create a bootable CD using an FDD image file in most CD burning programs; I have used this method to create a bootable CD from a Ghost floppy that also contained the OS image I wanted to install. No matter what the problem, there is almost always a way around it!

        Steve 

      • #3085029

        Hey guys, the floppy drive is dead

        by georgeou ·

        In reply to Hey guys, the floppy drive is dead

        I'm with the previous poster.  Bootable CDs are the only way to go.  In fact, they boot 10 times faster than floppies.  I keep plenty of IMG floppy images, and I've trained plenty of helpdesk staff to use them with bootable CDs that can mount network drives or mount USB mass storage devices so they can offload their files.

         

        This is also exactly why I put all my data in a partition separate from the C drive.  Microsoft should have separated user data into a different partition by default with Windows 2000!  Then I keep a perfect PQDI or Ghost image of my C drive, and I just flash it with Ghost in 10 minutes from a bootable DVD, and I'm back up and running, with all my data perfectly intact in the other partition even though I re-imaged my C drive.

    • #3268283

      Message To Programmers: Try Using The Junk You Produce

      by justin james ·

      In reply to Critical Thinking

      Yes, this is another insensitive, inflammatory "Programmers Stink" post.

      Why?

      Because THEY. MAKE. MY. LIFE. MISERABLE.

      Today’s problem?

      Visual Studio 2005.

      [addendum: 3/16/2006] A few commenters have made it clear that I didn't explain the problem very well. I am not working with SQL Server proper; that has excellent management tools. I am working with SQL Server 2005 Express. It is the same database engine, but it works off of files and is designed to be installed onto a client machine along with your software, so you don't need to worry about the customer having a full database server installed somewhere. SQL Server 2005 Express has absolutely ZERO management tools outside of Visual Studio that I have found. No other tool that I am aware of "knows" how to access a SQL Server 2005 Express database, unless I were to set up an ODBC connection to it (I haven't tried it, but it may work). The point is, I followed the directions in the help files, and they were incredibly difficult to use (this is the first time in the nearly 20 years that I have used computers that I had a problem with COPY/PASTE), and the software did not make it easy at all. Especially frustrating was the "You have an error!" box that popped up for each and every error without explaining what the error was or providing a way of stopping the process causing the error without killing the entire process. I want to see you click "OK" 24,000+ times and tell me that you're working with smart software.

      I’m writing a small application and I want to use the SQL Server 2005 Express database engine. I have the data elsewhere, in a FoxPro table, that I want the application to use. I exported the data to CSV. Visual Studio won’t import that to the SQL Server 2005 database. FoxPro didn’t work either. Excel didn’t work. Text file didn’t work. In fact, there is no “import” feature at all, unless I want to write Transact-SQL code. At the end of the day, Visual Studio won’t copy a table from one data source to another. I can’t do a “SELECT INTO” between data sources.

      What do the help files say? Open the data and do a copy/paste. So I tried that. For hours, it kept trying to do one row at a time and paste the contents into a single cell. Eventually I figured out how to get it copied right. Why do I need to figure out how to "copy correctly"? Am I missing something?

      The piece of garbage decided to have an error on one column. Instead of telling me what the error was, it decided to be vague. To make it worse, it popped up an error on every single row that I was pasting. All 24,000+ of them. It was faster and better to trash the whole Visual Studio session. Your "error handling" should NEVER make the user's life worse. The joke is, Access does this well: it does the import, then creates a second table with the errors. Simple. Elegant. Quick. Useful. Non-annoying.

      To all of you lazy programmers, stupid programmers, and other jerks who never actually thought about how your users would use your lousy software: I HATE YOU. I want to find where you live and cause you as much aggravation as your products cause me. For every hour of my life that a computer has saved, bad programming and poor user interface design wastes two. I want to find the moron who wrote this, go to their house, and waste 8 hours of their life, maybe force them to watch Bloodsucking Freaks 4 times. EIGHT HOURS OF MY LIFE HAVE BEEN LOST TO WHAT SHOULD BE A 5 MINUTE TASK. I am eternally grateful that I get paid salary and not per-project, I’d be eating out of Dumpsters at this rate…

      And the sad part is, this problem has caused me significantly less pain than Google’s Customer Dis-Service Department has recently.

      I swear, I want my next job to be 75% done with paper and pen…

      J.Ja

      • #3266182

        Message To Programmers: Try Using The Junk You Produce

        by apotheon ·

        In reply to Message To Programmers: Try Using The Junk You Produce

        I know it’s not much consolation when you have to use specific proprietary applications for work, but you might try using more open source software. For the most part, open source software is developed by people who want to use it, not by people who are just being paid to produce something for others.

      • #3074898

        Message To Programmers: Try Using The Junk You Produce

        by dogknees ·

        In reply to Message To Programmers: Try Using The Junk You Produce

        You seem to be abusing yourself! Visual studio is a tool used by programmers to write programs. If you’re using it you are a programmer. If it’s so bad, why aren’t you writing a better one?

        >>Why do I need to figure out how to “copy correctly”? Am I missing something?

        You have to figure out how to copy correctly, because like everything in life, if you don’t do it correctly it won’t work, pretty obvious really. This idea that somehow the system should figure out what you’re trying to do and how is ridiculous.

        If it gives you an error, that means you're trying to do something incorrectly. If I try to tell my spreadsheet that 1+1=3, I expect and want it to complain, and the more insistent I become, the louder and nastier the complaint should be. It should never "guess" that I meant to type 2 but was tired today. It should do precisely what you tell it to. If you tell it to do the wrong thing, that's what it should do, and the consequences are yours.

        If I’m dumb enough to try pasting null data into a column that won’t accept nulls, it should refuse, for every line. I made the mistake, I should be subject to some consequences, how else does one learn not to do it.

        I’m confused though, if Access is the tool for the job, why are you trying to use something else?

         

      • #3074813

        Message To Programmers: Try Using The Junk You Produce

        by tom.bunko ·

        In reply to Message To Programmers: Try Using The Junk You Produce

        Yeah, why can’t the software do what I want it to do, rather than what I tell it to do….

      • #3074768

        Message To Programmers: Try Using The Junk You Produce

        by dancoo ·

        In reply to Message To Programmers: Try Using The Junk You Produce

        IMHO Visual Studio is for programming, not database manipulation.

        SQL Server has various tools built into it that can easily load data from places like CSV files. I used to always use the command-line tool BCP, but there are SQL statement equivalents. I forget what they are, but try looking up the SQL Server help under "Load". There is absolutely no need to do cut and paste. And BCP gives fairly helpful error messages, too.

      • #3074717

        Message To Programmers: Try Using The Junk You Produce

        by mindilator9 ·

        In reply to Message To Programmers: Try Using The Junk You Produce

        LMFAO an even better question is why don’t you use php5 or python or something like that where YOU CAN ACTUALLY PROGRAM YOUR OWN ERROR MESSAGES! OMGZ! and use it fast and learn it easy. as was mentioned earlier, why are you using a widget builder to do simple database manipulation. god, no wonder you’re frustrated. open source is the way to go, as was also mentioned, because yes we do use each other’s stuff, and when we want to change it, it’s usually pretty easy. assuming you’re familiar with the language. all of your problems sound like an 8 year old who’s frustrated that his legos won’t assemble themselves into a masterpiece sculpture. you know that kid, he’s always trying to force different size or shape pieces to fit, sometimes breaking off a peg if it helps. take a lesson from the Demotivation posters, “When you earnestly believe you can compensate for a lack of skill by doubling your efforts, there’s no end to what you can’t do.” there’s a silver lining though, these mistakes you’re making that are pissing you off so much are actually teaching you what you need to know to be a good programmer. if you can see that. just stop blaming others for not building an IDE that reads your mind. i recently got an mvc framework dropped in my lap and the comments are somewhat lacking. i’m still struggling to find a way to subclass his controller in a way that plays nice with the rest of his subs, but while i hate that he didn’t make his code clear i don’t feel like he’s incompetent. the system works so obviously he’s not. i’m also relatively new to the mvc pattern having only played with a handful of test cases, so i account for my learning curve. i could easily moan that his lack of comments wasted two days of figuring out what implements what, but that’s just the breaks. the way i see it, whining about it is a bigger waste of time. you really want a job that’s 75% pen and paper? what’s the other 25%, using an abacus? or serving food?

        not trying to come down too hard on ya, but come on man your problems are your own.

      • #3074687

        Message To Programmers: Try Using The Junk You Produce

        by bluemoonsailor ·

        In reply to Message To Programmers: Try Using The Junk You Produce

        1) Create an Access database

        2) Link to external tables, point it at the SQL 2005 mdf file.

      • #3074612

        Message To Programmers: Try Using The Junk You Produce

        by justin james ·

        In reply to Message To Programmers: Try Using The Junk You Produce

        I’ve put my reply to these comments up (http://techrepublic.com.com/5254-6257-0.html?forumID=99&threadID=184332&messageID=1979179&id=2926438) in the form of a new post. Many of these comments started a lot of thought in my mind, thanks a lot!

        J.Ja

      • #3077168

        Message To Programmers: Try Using The Junk You Produce

        by vaspersthegrate ·

        In reply to Message To Programmers: Try Using The Junk You Produce

        Blame the “Don’t Worry Be Crappy” school of Guy Kawasaki for it. Or the “Screw Jakob Nielsen” design anarchist/narcissists.

        All it takes is 5 typical users in a well designed User Observation Test to get rid of most bugs.

        I feel your pain. And anger. Cuz I yam Vaspers the Grate. 

      • #3076989

        Message To Programmers: Try Using The Junk You Produce

        by laconvis ·

        In reply to Message To Programmers: Try Using The Junk You Produce

        With users like this it is no wonder that programmers refuse to communicate.  There is a partnership between good programmers and good users. 

        In today's environment, the idea of a job that does not use Information Technology is hilarious.  I do hope that you find such a job; maybe trash collector, sewer cleaner, or zoo cage maintenance.  They all have a future without any technology.

      • #3075160

        Message To Programmers: Try Using The Junk You Produce

        by lobster-man ·

        In reply to Message To Programmers: Try Using The Junk You Produce

        You get no sympathy from me.  I have been programming for thirty years and have always stayed current.  When you look at how easy things are to program today compared to Autocoder, assembler languages, Fortran, and COBOL, you really appreciate how much Microsoft has helped the industry, and specifically the programmers in that industry.  Today I am using Visual Studio 2005 and have also used both the SQL Server 2005 and SQL Server 2005 Express tools.  Like any programming tools, they are difficult to learn, but once you know them they are productive.  The suggestion of using Access was a good one.  It is a simple database that has been working well for years.  Or why not develop using the full SQL Server product and then downsize to Express?

        Be thankful that you are in a profession that has been around many years and will continue to be around well into the future.  Don't get frustrated if something takes a little while to learn, because it will pay off later.

        CharlieC

      • #3075075

        Message To Programmers: Try Using The Junk You Produce

        by kbremer1 ·

        In reply to Message To Programmers: Try Using The Junk You Produce

        Unfortunately, with some free software you have to go that extra mile to get the tools/help that you need.  You should have downloaded the "SQL Server 2005 Express Books Online" and done a search on Import.  Then you would have learned about using the BCP command-line application, which can import and export data from and to files.  

        I would also suggest that you download the "SQL Server Management Studio Express 2005 CTP" to help you with creating ad-hoc queries and managing your databases.

        Ken

      • #3264980

        Message To Programmers: Try Using The Junk You Produce

        by narg ·

        In reply to Message To Programmers: Try Using The Junk You Produce

        I personally do a little programming, and I know a lot of high-end programmers.  Trust me, I've asked the same question of them, and of myself as well.  It just doesn't work like you'd think.

        Programmers know their stuff inside and out.  They know where the buttons to do this and that are.  They know why the program acts like it does.  So, in the end they basically become complacent (sp?) about their work.  I don't mean that in a bad way, but they learn to overcome what their work will and will not do.  Place that same product in the hands of an average user and lo-and-behold!!!  The difference in interaction can be eye-opening to a truly concerned programmer.  (Personally I think Vista is heading in this direction; they are over-killing the "usability" by taking away the interface tools we are used to, but that's another discussion.)

        It's virtually impossible to build the interactions of 6 billion people into a program's abilities and shortcomings.  I've tried to incorporate as much user interaction and consideration into my work as I can, and it's impossible to please all of them.  True, some programs seem to have attempted no usability considerations at all, but most do.  It's a daunting task to say the least.

        My personal Microsoft favorite response is “it works as designed”.  You know they banged their heads on how to make a program respond to certain input, then gave up thinking “that’s good enough”.  Makes you really wonder sometimes though.

      • #3106043

        Message To Programmers: Try Using The Junk You Produce

        by bjasper1 ·

        In reply to Message To Programmers: Try Using The Junk You Produce

        This person likely has never worked with a SQL Server of any kind before. Pasting data into a table is easier than creating the table to begin with.

        But even the most experienced database programmer would try a small sampling of data prior to trying to paste 27,000 rows.

        I find it interesting that people are touting the benefits of open source for someone who does not know what he is doing to begin with.

        When looking at the poster's profile, he insinuates he is a programmer. I know if his resume ever crosses my desk I will not be hiring him.

      • #3271261

        Message To Programmers: Try Using The Junk You Produce

        by daferndoc ·

        In reply to Message To Programmers: Try Using The Junk You Produce

        Hear, Hear!!!  I have been saying the same thing for 25+ years.

        Does anyone else believe, as I do, that Microsoft programmers do all their development work on Macs??  If they used Windows, it might work.

         

        FLS

    • #3074614

      Users never do the wrong thing, you wrote the wrong code

      by justin james ·

      In reply to Critical Thinking

      This is a response to a comment on a previous article: “Yeah, why can’t the software do what I want it to do, rather than what I tell it to do….”

      <sigh>

      This is one of those cases where I get mad that computer communication doesn’t carry vocal inflection. If you’re being serious, then you truly understand what my gripe is. If you’re being sarcastic, it’s exactly that attitude that is the problem.

      The user should not have to adapt to the software. The software should adapt to the user. The user pays to use software in order to gain productivity. The software designer is not paying the user. If your software is not easy to use, the customer can always go get someone else's software. If the user experiences any pain while using your software, whether through your poor design or their own ignorance or incompetence, someone else will be more than happy to write software that helps that user and take their money away from you.

      I recommend that anyone who writes software that interacts with humans try using Perl for a few months. The whole language has "does what you mean, not what you say" as an explicit design goal. As a result, it is one of the most pleasurable languages to work in that I have ever used. The language itself virtually never gets in your way. It is such an incredible world of difference. .Net languages, Java, and other languages I have worked with (recently) have only been able to keep up with Perl in terms of the ability to rapidly develop software because they have such gigantic libraries. As languages in and of themselves, they are light-years behind.

      Why am I mentioning all of this here?

      Because of the concept of "gee, well, you're the one who clicked on the wrong thing." Wrong. Unless the user has an accident, such as a typo or a mis-click, the user has always done precisely what they thought would accomplish their goal. ALWAYS. To believe otherwise is designer hubris, an attitude that needs to be eradicated from your thinking if you ever want to be a great (or even good, let alone employed) programmer, engineer, or any other kind of designer.

      Here's an example: propositions on ballots. If you have ever lived in a state that does these, you know exactly what I am talking about. You are presented with one to five paragraphs of dense legal text, and at the end, asked to vote "yes" or "no". Here in South Carolina, a proposition that was heavily demanded barely passed. Exit polls indicated that it had won overwhelmingly. What happened? A significant portion of voters simply did not understand which of the two choices they were presented with would actually vote for the proposition. I know that I re-read it three or four times, and still was not sure if "Yes" would vote for the end result that I knew I wanted. Post-election polls that asked what choice people made, compared to what result they wanted, showed this. The ballot was poorly designed.

      Software is all too often designed by people who have this designer's hubris. If your software reacted badly to a user's input, it is because you failed to perform proper error handling. More importantly, you failed to comprehend your user's needs. Web forms are notorious for this; instead of anticipating that a user may put dashes or spaces (or no delimiters at all) into a credit card or phone number entry field, stripping out the non-numerics, and then trying to process the card, they limit the user's input. The user doesn't find out until too late that the field is limited to 16 digits; then they need to go back, remove the dashes or spaces, and finish entering the credit card number. Frustrating! Just make the field bigger! More importantly, those dashes/spaces help the user ensure that the numbers were put in correctly. It takes but one line of code to strip the junk characters out and reformat that credit card number (in Perl; other languages need a zillion lines to make and use a regex object...). Either way, five minutes of programming will make your users happier and provide you with more accurate data input.
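
      For the curious, that one line of Perl looks something like this (wrapped in a tiny, self-contained example; the variable name and sample input are made up):

      #!/usr/bin/perl
      use strict;
      use warnings;

      my $cc = '4111-1111 1111 1111';    # whatever the user typed
      (my $digits = $cc) =~ s/\D//g;     # the one line: copy, then strip non-digits
      print "$digits\n";                 # prints 4111111111111111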

      Instead, we get websites that kick you back with an error message (often hard to find the error message on the screen, to make it worse). That’s designer hubris in action.

      Let’s review some of the comments the previous article received to see some of this designer hubris in action:

      “I know it’s not much consolation when you have to use specific proprietary applications for work, but you might try using more open source software. For the most part, open source software is developed by people who want to use it, not by people who are just being paid to produce something for others.” – apotheon

      Sadly, much (if not most) open source software has even worse user interface design than proprietary/commercial code. All too often, the answer to even a simple item is in the manual, not the interface. A user should never have to read the manual for anything but the most complex tasks. How to accomplish simple, day-to-day tasks should be intuitive and easy. Too often, I have found using open source software to be like using a website where many of the pages aren't linked to, but need their URLs typed in manually.

      “You have to figure out how to copy correctly, because like everything in life, if you don’t do it correctly it won’t work, pretty obvious really. This idea that somehow the system should figure out what you’re trying to do and how is ridiculous.” – gbently

      Wrong answer. When someone has table data in the clipboard and is pasting into a table object, it is fairly obvious that the user is trying to populate the table with data, not an individual cell. If the program is in doubt, it should simply ask, "What are you trying to accomplish, and how can I help that happen?"
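      As a hedged illustration of how little guessing is actually required (made-up clipboard contents, not any real toolkit's API):

      # If the pasted text contains tabs and newlines, the user almost
      # certainly copied a grid, not a single value.
      my $clip = "id\tname\n1\tSmith\n2\tJones\n";   # pretend clipboard contents
      if ($clip =~ /\t/ && $clip =~ /\n/) {
          print "Looks like rows and columns -- offer to fill the table.\n";
      } else {
          print "Treat it as a single value for the current cell.\n";
      }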

      “IMHO Visual Studio is for programming, not database manipulation.” – dancoo

      I agree 100%. To load the data into MySQL, I used the LOAD DATA statement. If I wanted it to go into SQL Server 2000, I would have (probably) used the Import Wizard. If I wanted it in Oracle, I would have used sqlldr.exe. But I wanted it in SQL Server 2005 Express, which has no management tools that I could find. I followed the directions in the help system. They said to use copy/paste. The database tools in Visual Studio aren’t so hot. But they are still better than nothing, much of the time, and in the case of SQL Server 2005 Express, they seem to be the only thing going.
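      For what it's worth, the MySQL route I mentioned is roughly this (a sketch from memory, with made-up table and file names, assuming the DBI and DBD::mysql modules):

      use strict;
      use DBI;

      # Connect, then bulk-load a tab-delimited file in a single statement.
      my $dbh = DBI->connect('DBI:mysql:database=salesdb;host=localhost',
                             'user', 'password', { RaiseError => 1 });
      $dbh->do(q{
          LOAD DATA LOCAL INFILE 'reps.txt'
          INTO TABLE sales_reps
          FIELDS TERMINATED BY '\t'
          LINES TERMINATED BY '\n'
      });
      $dbh->disconnect;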

      “LMFAO an even better question is why don’t you use php5 or python or something like that where YOU CAN ACTUALLY PROGRAM YOUR OWN ERROR MESSAGES! OMGZ! and use it fast and learn it easy.” – mindilator

      Wow, this one had me laughing. I am not quite sure how my using Visual Studio compares to my using PHP or Python. Last I checked, I was trying to load data into a database; what language I was writing my software in is 100% irrelevant. Even if it was relevant, this poster makes a really stupid assumption: that I had much of a choice in my language. This person is obviously trying to make me look stupid, saying that I am too stupid to use PHP or Python and that I'm a fool for using Visual Studio. What if I told this user that I was using IronPython for Visual Studio, or Perl.Net? Would that make them happy? Probably not. It also shows this person's ignorance. Visual Studio does a lot more than write web applications. In this case, I am writing a desktop application. This is why I was using SQL Server 2005 Express in the first place. PHP and Python aren't too much help when writing a Windows desktop application!

      “just stop blaming others for not building an IDE that reads your mind.” – mindilator

      This is actually one of the things that a good IDE does. A good IDE, for example, doesn't just say that an exception occurred when debugging; it shows me the error. That's a form of mind reading: it knows that I want to see the error that occurred. More to the point, I was simply following the instructions. My initial thought was to find either an import wizard or an import utility. The documentation did not mention either of those. Most importantly, a well designed system does exactly what I need it to do. If I need to treat my computer like a newborn puppy, I might as well give up now.

      “i could easily moan that his lack of comments wasted two days of figuring out what implements what, but that’s just the breaks.” – mindilator

      Not only could you, you should be concerned about his lack of comments. You wasted two days of your time working with his system. If he had spent a few hours writing some basic documentation, you would have saved a lot of time. Furthermore, what if he had changed the function of a parameter, but not the name? Without that documentation, you would need to inspect his code line by line to see what each parameter is used for. Additionally, the process of writing comments serves as a code review. Many times, it is during the comment-writing phase when it becomes obvious how to streamline the architecture, add in additional functionality, and so forth. Good and great programmers write comments. Bad and average programmers don't. I'm not saying that writing comments is enjoyable; it really isn't. But the lack of comments is disturbing, particularly if he was aware that this was code that other people would be working with.

      “you really want a job that’s 75% pen and paper? what’s the other 25%, using an abacus?” – mindilator

      I'm actually glad you mentioned the abacus. Many abacus users are significantly faster at performing math than many computer users. What does that tell you? Yes, I want a pen and paper job. When I am programming, I spend about 50% of my time not writing code, if not more. I spend a good portion of time planning on paper. I spend time talking to my customer or end user to determine their needs. I spend time sketching potential screenshots and showing them to others for critique and input. By the time I'm actually sitting down and writing code, it is a fairly quick and trivial process. It doesn't take much time to write the code itself, and debugging is typically fairly quick as well, assuming I have access to a quality debugger. Things like breakpoints and watches (vs. "print 'Executing line 678, the value of sFileName is ' . sFileName;") cut debugging time by about 75%, and often help you find problems before they even occur, because you get to watch the logic you wrote in action.

      The point is, the user does not deliberately punch themselves in the head. A user who "clicked the wrong button" and wiped out their data is the victim of a bad interface design, not an idiot. Users don't say to themselves, "gee, I want to wipe my hard drive"; they do these things because the design stunk. It is my belief that user interface design stinks because many programmers simply do not pay close attention to the needs of their users, do not test with real-life users, and have a bad attitude towards users. It's time the designers started paying better attention. Look at Google. Google went from a nobody to a market giant simply by helping the user accomplish their goal (searching the Web for data) better than anyone else. They did this by providing a significantly better user interface, and by taking the time to write a better search algorithm. They identified the user's need, and wrote software that met that need while working the way the customer needed it to, instead of making the user work the way the software did.

      J.Ja

      • #3077167

        Users never do the wrong thing, you wrote the wrong code

        by vaspersthegrate ·

        In reply to Users never do the wrong thing, you wrote the wrong code

        Why don't software providers do real User Observation Tests with typical customers? That is how most of these bugs would be caught.

      • #3077064

        Users never do the wrong thing, you wrote the wrong code

        by dougb ·

        In reply to Users never do the wrong thing, you wrote the wrong code

        J.Ja,

        I agree; 150%. One of my goals as a developer is to anticipate what a user might do accidentally and program allowances for that. If you want to enter today's date in an application, you can enter 3.17, 3-17-2006, or 03/17/06. If you enter something my program does not understand, I do not pop up a message box that says "Error somewhere on the form. Try to find it." It states that the date is not recognized, suggests the proper format, sets the focus to the date field, and selects the text.
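        (A rough sketch of that kind of tolerant date handling, in Perl and with purely illustrative formats and defaults rather than DB's actual code, might look like this:)

        # Accept 3.17, 3-17-2006, or 03/17/06 and normalize to one format;
        # on failure the caller can complain about the date field specifically.
        sub parse_date {
            my ($text) = @_;
            if ($text =~ m{^(\d{1,2})[./-](\d{1,2})(?:[./-](\d{2,4}))?$}) {
                my ($m, $d, $y) = ($1, $2, $3);
                $y = (localtime)[5] + 1900 unless defined $y;   # default: this year
                $y += 2000 if $y < 100;                         # two-digit years
                return sprintf '%02d/%02d/%04d', $m, $d, $y;
            }
            return undef;
        }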

        Too few developers out there go to that level of detail. They want to get it cranked out, get their money, then move on to the next project. From what you have described about SQL Server Express, I would say you have every right to be frustrated. A database engine that cannot import/export data is about as useful as a canteen in the desert with a 2″ hole in the bottom of it.

        DB

      • #3075257

        Users never do the wrong thing, you wrote the wrong code

        by metalpro2005 ·

        In reply to Users never do the wrong thing, you wrote the wrong code

        I agree to a great extent with the comments laid out here. However, there are more dynamics in software development which shift the focus from 'UI quality' (or process support) to 'producing code'. Clients tend to see software development as a technical issue. So clients put more emphasis on producing code (i.e. giving the user MORE options to get lost in) than on doing the right things to support end-users.

        This does not mean the programmer is off the hook here, but my experience as a programmer is that even with the best intentions (trying to convince management otherwise) there simply is not enough time (commitment, in every sense of the word) to do the 'right thing', especially in a web environment, which is stateless and platform independent by nature.

      • #3075178

        Users never do the wrong thing, you wrote the wrong code

        by lee ·

        In reply to Users never do the wrong thing, you wrote the wrong code

        My favorite definition of a software error comes from “Software Reliability” by Glenford J. Myers.  I am showing my age now, but this book was written in 1976, this topic is not a new one by a long shot.

        “A software error is present when the software does not do what the user reasonably expects it to do.  A software failure is an occurrence of a software error.”

        Of course, I have heard developers react to that with "How can I possibly know what the user reasonably expects?"  Well, isn't that half of the task of being a developer, finding out what the users expect?

        Lee

      • #3075045

        Users never do the wrong thing, you wrote the wrong code

        by apotheon ·

        In reply to Users never do the wrong thing, you wrote the wrong code

        re: open source software

        There’s a big difference between a v0.3.1 piece of open source
        software that is still only being developed by the original author and
        two of his buddies and a v1.0 or greater piece of open source software
        that has thousands of users, out of whom hundreds are contributing code
        to make it more “intuitive” and easy to use. I suspect you’re
        misjudging open source software by choosing your examples poorly.

        I’ve got news for you that you may find shocking: the examples
        you’re holding up as perfect specimens to demonstrate the differences
        between DWIM and simply doing what the hubristic designers expect
        people to bend over and take are great examples of how a mature OSS
        project kicks the butt of closed source proprietary projects almost
        every time. .NET and Java are closed source solutions. Perl is an open
        source project.

        Languages similar to Perl, such as Ruby and Python, are also open
        source projects. They also implement great regex capabilities (though
        Perl is still the best at regexen). The Rails web development
        framework, which uses Ruby, is about the most unsurprising (in terms of
        “when I do this, it should respond the way I expect”) development
        framework I’ve ever seen, though it’s unfortunately very surprising to
        come across something that works so well and easily for web application
        development. The Ruby language and community themselves are very
        focused around the “Principle of Least Surprise”, where it’s assumed
        that good design means things should work in a way that makes sense.

        All of this sort of thing comes about because of the open source
        development process, and the presence of a development community
        rather than a proprietary development shop run by a vendor. When
        your developers and users are one and the same, mature software means
        software that works exactly as you think it should.

        By the way, you seem to be a pretty big fan of Perl. Have you
        checked out the perlmonks.org community? You might find something there
        you like.

      • #3074996

        Users never do the wrong thing, you wrote the wrong code

        by kevin.cline9 ·

        In reply to Users never do the wrong thing, you wrote the wrong code

        “Good and great programmers write comments”

        Good programmers write code that is understandable without external comments.  See Martin Fowler’s Refactoring for some specific techniques of comment elimination.

      • #3076391

        Users never do the wrong thing, you wrote the wrong code

        by dogknees ·

        In reply to Users never do the wrong thing, you wrote the wrong code

        Surprisingly, I agree almost entirely with what you’re saying here.

        However, there is a fundamental divide between user level applications and development tools. In development, you need to know exactly what your code and the rest of the system are doing. I’m at a loss to see how you can develop software if you don’t have precisely defined syntax and semantics that completely define the environment.

        That was my main point in the post to which you responded. You're using a developer's tool, and developer's tools must act in a precisely defined way to be useful. It's not at all helpful for them to fix things up in the background.

        I'd still argue, though, that there is a limit to what the application should accept. At a certain point, it may become obvious that the user has no idea what they are trying to accomplish. Is it appropriate to keep guessing, or is it better to stop them, tell them to go read the manual, and have them try again?

        My example about attempting to force a spreadsheet to get a total of 3 when it adds two values of 1 is also relevant. The software shouldn't allow the user to make blatant errors of logic. If I try to turn a GIF into a MIDI file, the system should complain; it's nonsense to attempt this.

        I guess there's another side to this. I like being told I'm wrong; it teaches me something, which is a good thing. If I'm not doing something in the right way, I want to be told there's a better way rather than have the PC let me continue doing something dumb. I'm perpetually surprised when others react badly to correction. But that's just me.

        Another issue. My definition of well written software is software that conforms precisely to its specification in every detail. It's obviously the task of user-interface designers, analysts, system architects, and customers to define the system. All the programmer does is implement the specification as written. If there's a problem, it's not with the programmer, but with the designer/analyst, and the management that decides what resources are to be expended on the development.

        Regards

    • #3076990

      The user/developer disconnect

      by justin james ·

      In reply to Critical Thinking

      I've been blogging a bit lately about poor user interfaces and the developers who create them. Sometimes, the problem with the end result of software is caused by the initial project specifications. Here are some of the major pitfalls that can occur in any project.

      The initial project specifications are not correct

      All too often, the user has an idea of what they want in mind at the beginning of the project, and that vision is completely different from what they want by the end of the project. A project that takes a long time to develop is sure to not be finished before the business rules or needs change. Another problem that I have seen time and time again is that until the user gets their hands on a working version of the product, they do not find out if their initial perceived needs are their actual needs. At the beginning of a project, the user may only want a few options, but once they see the power that the software is giving them, they want a lot more options. Or, to go the other way, they ask for a thousand options or ways of doing slightly different tasks, but find out that in reality there are only a few features they actually use, and the rest are waste.

      I combat this by providing sketches and mock screenshots throughout the development process. I try to involve the user as much as possible, because I have found that it is always better to spend an hour giving them a prototype (even if none of the widgets work) that lets them get an idea of how things will work, than it is to spend weeks or months writing code that they don't want.

      Users seem to think that low-output projects are low work

      This is one of my favorite user misconceptions. They seem to think that the number of results directly impacts the length and difficulty of the development process. Something happened today that is an excellent example of this. I am currently working on a system that ties maps to the user's sales data and sales representative data. One of the people in my company sent me some information to think about using "as a demonstration." He seems to think that it should be easy or trivial for me to create results based on this data, because the data set is significantly smaller than what we normally work with. Wrong! It is no more difficult to program to handle 1 million rows of database data than it is to handle 100 rows. It is almost always the number of columns and the complexity of the data transformations that make a project more or less difficult. I frequently have customers ask me to run a report ad hoc that will only generate a few "bottom line" numbers. They think that because they are asking for only one row of data, this is a trivial task. It is just as much work to produce the "bottom row" numbers as a full report, because the full report needs to be made to generate the bottom row of it anyways.

      The only way to beat this one is through educating your users. But there is a way to be nice about it, and give them a value add as a bonus. For example, in the bottom line number scenario, I go back to the user and say to them, “listen, to generate those bottom line numbers, I need to do a full report anyways. Would you prefer to get the full report while I am at it? It will take only a little bit longer to be able to provide a full report to you, and then you will be able to have the individual numbers as well.” This way I have impressed upon them that while the amount of output they are asking for is low, the amount of work is the same as a high-output request. In addition, I am giving them the option of having information that they might not have asked for because they thought it would be too much additional work to produce. The users are very happy with this approach, and after a certain point they come to you and say “would it be much harder to do XYZ versus ABC?”

      The project specifications do not match the users’ goals

      This one is always a problem, particularly when the people performing the work are separated by one or more degrees from the users. In fact, like a game of "Telephone", the more layers between the developers and the users, the worse the problem gets. For example, I recently had a user who needed to get names and addresses from an Excel spreadsheet, insert the information into some text to generate a form letter, and be able to print those letters at Kinko's. This was their actual goal. My project specifications were "take this Excel spreadsheet, insert these values into the text already entered into these cells, then save the individual sheets to PDF format, and make sure they look good." Between trying to take flowing text in Excel and make it "look good", and producing PDF output, I could tell that someone had given me project specs that were not properly aligned with the user's needs. They had given me project specs that matched what they thought would be the best way of performing the task, not the task that the user actually wanted done. I dodged this by asking what the user had actually requested. Once I found that out, the project was much easier – it took a few hours as opposed to a few days.

      Whenever you are given project specs, you need to get access to the user. In some companies, you may need to demand it. Many project managers get the attitude that you, the developer, needing to speak to the user reflects poorly on them as a project manager. That may be so, or it may not. But at the end of the day, you are going to be writing the code, not the project manager. It is in everybody’s best interest for your project specs to accomplish the user’s goals, not to fulfill a PM’s vision of how you should accomplish what they perceive the user’s goals to be. A PM who refuses to allow users to interact with the developers is a bad PM. Currently, my customers have my direct number. They take their requests directly to me, and they love the “self serve” aspect of it. Does my boss get involved? Sure he does, he needs to make sure that one customer’s needs don’t interfere with my time on other projects, that they are aware of what is within scope and out of scope of the contract, and so forth. But at the end of the day, the user is getting exactly what they need, they are in control, and they are delighted to spend the money because we are doing for them what their own staff cannot or will not do.

      At the end of the day, the smaller the gap between the user and the developer, the better the product will be in terms of meeting the user's needs and expectations. Good communication between the users and developers, while sometimes difficult, tends to save significant amounts of time and frustration. It is always better to spend an hour talking to the user, even if they are some empty-headed sales manager, to get to know their real needs, than it is to spend three months designing and developing a project, only for the user to never actually use it, or to reject it outright, because it doesn't actually do what they need, even if it does what they asked for.

      More to come on this topic over the next few weeks…

      J.Ja

      • #3076963

        The user/developer disconnect

        by wayne m. ·

        In reply to The user/developer disconnect

        Look At XP & Agile Development 

        I would recommend looking at Agile development methods in general and Extreme Programming in particular.  These approaches address many of the issues described above.

        Agile methods focus on eliminating degrees of separation between developer and user and focus on providing quick turn around of functioning (not just shell) software.  Miscommunication is always possible, but it can be reduced by eliminating middlemen and quick turn around ensures the miscommunication that does occur is identified and addressed as quickly as is possible.

        It is quite a mind shift for many to realize that adding ever more and ever more thorough levels of review of intermediary products exacerbates the problem instead of improving the situation.  Look at XP; I think you will find that it addresses the issues described above and more.

      • #3076389

        The user/developer disconnect

        by dogknees ·

        In reply to The user/developer disconnect

        >>At the end of the day, the smaller the gap between the user and the developer, the better the product will be in terms of meeting the user’s needs and expectations.

        Agreed and agreed.

        There's two sides though. Doesn't the user bear some responsibility for taking the time and making the effort to learn about the options they have? It's ultimately their responsibility to ask for what they really need. It's not mine to try to lead them along some path of learning to think logically and thoroughly about their needs. If they don't have the skills, then they need to go and learn them; I'm not a teacher.

        I’d expect the builder of my house to be familiar with the latest technologies available to him. I similarly expect a knowledge worker to make themselves familiar with the latest tools available to them, which in large part is software.

        Regards

      • #3076166

        The user/developer disconnect

        by wayne m. ·

        In reply to The user/developer disconnect

        Disagree – Developer Is Responsible

        Remember, the user already knows how to do his job and has been doing it.  The software is developed to make it easier for him to do his job.  As the technical expert, it is up to the developer to propose how the software can help the user.  The analogy of the house builder started out correctly, but the conclusion reversed roles.  Just as the house builder needs to provide expertise in the technologies available, the software developer must also provide this knowledge.

        Remember, software exists only to automate manual tasks.  If the automation provided by the software is not obvious, the user will not know to look for it and will continue to use the manual approach.  Always remember that the user-developer relationship is not peer-to-peer; rather, the developer is providing support to the user, and the developer cannot expect the user to meet him halfway.

      • #3076935

        The user/developer disconnect

        by fbuchan ·

        In reply to The user/developer disconnect

        As a developer for more than 2 decades, I have found that there are two basic situations that work poorly for everyone involved.

        The first is letting the client abdicate their responsibility. The developer (or some representative like a project manager) must focus the client, shape their expectations, and engender a transfer of their knowledge. Just as we don't expect clients to be savvy about development options and techniques, they cannot expect their developers to be aware of their job needs. Requirements analysis is the foundation for communicating needs. It is the client's responsibility to make an effort to convey those needs concisely and effectively, with help, of course. The method of requirements analysis is almost irrelevant as long as it results in good communication, but we can never forget that the problem with most software is that clients got exactly what they asked for in many cases, rather than what they needed. Heck, when I buy a meal at a restaurant I order what I want, and modify it as needed even then with comments like, "light on the salt." Given the price of software, clients have to expect to provide valid and timely input.

        The second fatal approach to any software development is to expect developers to design a workable interface in a vacuum. As soon as I hear the words, "You know what we need," and hear a developer reply, "Yeah, I do," I shake my head sadly. Developers have a different focus than end-users, and cannot be expected to become cross-domain experts to the degree where they can magically envision some form of ultimate user interface. The only way they can even come close is if they shadow the user community for days, weeks or months. If software is modelling a process of any degree of complexity, the interface will either be complex and poorly done, or overly simplified, unless the real experts in its use (the end users) actually participate. The clients already understand what will be an assist, and the developer must be made to understand, not allowed a pretence of understanding.

        As for the comparison to the building of a house, consider that it is usually problematic to have a general contractor do your plumbing. It is far better to think of developers as medical providers. Some are GP types, and some are surgeons. You wouldn't hire an eye, ear and nose doctor to do brain surgery; and while a brain surgeon can tell you that you have a cold, you'd be a fool to have him open your skull to do so. Expertise requires focus, and focus can be counterproductive to end-use simplicity. The best example of the gap between the developer and the user is seen in some excellent open source products like Firefox. Powerful and incredibly solid, yes; too damn hard to use for a majority of average users, apparently. Or, closer to home for developers, consider the latest Visual Studio product: alongside vast improvements in the user interface are pockets of aggravation caused by the fact that developers cannot even reliably create interfaces for developers.

        Finally, to add one pitfall to the list generated so far: “lack of value proposition is a serious challenge in developer/user relationships.” Users simply don’t know how to equate the cost of an undertaking with its value, and frequently cannot grasp how making their latest proprietary application look and feel like MS Office is a million dollar prospect. While I have been lucky to never walk into a development where the value proposition connection was totally lost, I have watched a few go off the rails while developers struggled to deliver impossible expectations for pennies. And I did once actually have to recompile an entire application to change the label beside a field on a screen called “Inspection” so that it read “Inspection Description” rather than “Description.” (And, yes, they had approved the field labels previously, but thought description was confusing.) Until clients understand the cost versus value of their requests, I fear that the design process will never truly mature.

      • #3075912

        The user/developer disconnect

        by dogknees ·

        In reply to The user/developer disconnect

        The example I gave regarding builders etc. is in regard to the client. In my environment, my clients are information workers. In the same way I expect a builder to know his tools, I expect them to know theirs. As the primary tools of the knowledge worker are software, I don't think it's unreasonable to expect a basic level of understanding, and for them to spend some of their time and effort getting themselves to this level. I see it as part of any professional's job.

        I generally agree with the comments. But I still say there is a degree of responsibility that must remain with the client. To ask for what you want is the fundamental one. Despite the advances in science in the last few decades, mind reading is still not commonly available. If the client can't spell out what they want, they haven't given it sufficient thought or effort. A very wise person once said "if you can't measure it, you don't understand it". I'd paraphrase that as "if you can't define your requirements, you don't know what they are".

        If you don’t really know what you want, how are you going to know when you get it?

        One of the respondents made the comment that software only exists to make tasks easier. Disagree vehemently. There are whole ranges of applications that have only become possible due to cheap fast computers. They weren't done at all before. Simulation, various kinds of graphics, deep analysis of historic data, etc.

        Regards

      • #3265560

        The user/developer disconnect

        by sghalsasi ·

        In reply to The user/developer disconnect

        When I think of the developer-user communication gap and project failure, it is because of both of them. But I think, as the user is the origin of the requirements, a rigorous brainstorming must be done to make the user communicate effectively. It should be seen as a separate responsibility of the organisation to equip their employees to tell the requirements when the decision to go for a new system is finalised. The developer alone cannot be held responsible for requirements or project failure.

      • #3265496

        The user/developer disconnect

        by al_lee ·

        In reply to The user/developer disconnect

        Interesting to read some of the comments within the thread of the blogs. Everyone seems to be in agreement that proper requirements and immediate access to the user are the keys to successfully generating requirements and hence completing the work accurately and as expected. Let's level-set the discussion with that over-simplified statement above.

        I've also read about processes and analogies regarding the proper delegation of responsibilities (i.e. doctors and builders) to properly execute on projects; yet I've not heard from the development community one comment on properly delegating the tasks to the appropriate skill-sets: introducing and utilizing other development resources (a multi-disciplinary approach) to be responsible for defining, understanding and generating the users' requirements.

        I've practiced XP/Agile/MSF/IPD/Crystal/UCD/PMP/et al. I'm a PM with an analyst background, and I've never started a successful project without the appropriate resources and skill-sets. What I'm talking about is a multi-disciplinary approach. I delegate the requirements realization to not only a lead developer but also a User Experience Designer. This individual has a usability background, is a visual designer and an information architect. You're worried about taking what the user says as what the real requirements are? These are the people with the skills to do the proper 'shadowing' and interviewing techniques/observations/questions that will realize the true tasks and activities and translate them into accurate requirements that are represented within the interfaces. They will even perform quick usability studies to validate the designs prior to development's implementation. This provides enough time for the development group to absorb the business requirements, translate them into technical requirements, and execute on low-level system designs, proofs of concept and anything else required to make the technology work best.

        Maybe I've been lucky in that I've had very little difficulty in selling this to a client or organization; everyone from stakeholders to the project team can understand and accept the approach. But I've found this to be one of the most effective methods for interfacing with users. Developers should not be attempting this alone; they will already have too much on their minds to effectively provide this service. You would never expect the developers to write end user documentation, so why would you expect them to go it alone to realize the requirements?

        The industry has to realize that in order for projects to succeed we need the appropriate skills represented to provide the level of competence and expertise to succeed. In order to do this we need to educate ourselves and to introduce other possibilities that broaden more than just the 'development' world, with methods that work and that don't. Hopefully this helps someone.

    • #3074397

      Programming “How” versus programming “Why”

      by justin james ·

      In reply to Critical Thinking

      I want to send my thanks out to all of the readers and commentators who have been spending so much of their time to post comments, debate, criticism, and more on my most recent string of blog posts. You guys are the greatest, and I think it is really awesome that you are putting as much, if not more, time and effort into discussing this blog as I spend writing it. My recent post regarding the relationship between users and developers has started a great thread of comments. One of the comments, from gbentley, provoked quite a bit of thought in my mind:

      “One of the respondents made the comment that software only exists to make tasks easier. Disagree vehemently. There are whole ranges of applications that have only become possible due to cheap fast computers. They weren’t done at all before. Simulation, various kinds of graphics, deep analysis of historic data, etc.” – gbentley

      This quote contains two commonly accepted ideas. The first is that “software only exists to make tasks easier.” The second idea is “[t]here are whole ranges of applications that have only become possible due to cheap fast computers.”

      Are these two ideas actually in opposition? Are either of these ideas even true?

      “Software only exists to make tasks easier.”

      I will call this "the assistant argument". It is so close to being a truth that requires no questioning. But it has a really big problem: the word "only". Without "only", this is a "duh" statement; of course software exists to make tasks easier. But to claim that software's only purpose is to simplify tasks, without enabling new tasks? That's a pretty bold claim. It is easy to see why people hold this idea, though. Email replicates the postal system of letters and packages, spreadsheets replicate ledger books, word processors replicate typewriters (which themselves merely replicate paper and pen), and so forth. Indeed, with the exception of software that is directly related to computers, there are very few pieces of software which do not replicate an existing task that can be done without a computer (the qualifier is in reference to things like development tools or backup software; without computers there would be no need to write software and nothing to back up).

      “There are whole ranges of applications that have only become possible due to cheap fast computers”.

      I call this idea "the enabler argument". This claim seems to be much more reasonable than the first one. It does not deny that software simplifies many tasks, but simply states that software and computers allow work to be done that could never be done before. This claim has two possible variations. The literal variation boils down to "there are certain things that simply cannot happen without a computer." This is pretty hard to prove. Even the examples given by gbentley are all things that someone with amazing patience and/or dexterity, or a huge team of people, could do with nothing more than paper and pencil. From there, we have the pragmatic variation: "there are certain tasks that, while able to be performed without a computer, it is unrealistic to expect to be done without a computer." gbentley's examples, I believe, fall firmly within this argument, and quite nicely at that.

      It is actually impossible to evaluate either of these claims without an understanding of what a computer is.

      For example, I could state, "a ledger book is a form of computer, albeit a mechanical one; it is a mechanical database where all C/R/U/D (Create/Read/Update/Delete) is performed by the user with a pencil, and all calculations must be performed by the user, but it is a database all the same." With that statement, we have essentially said that even a checkbook is a computer, and, more or less, that even the most basic accounting is impossible without either a computer or superhuman memory and math skills.

      Or one could interpret "computer" to mean any electronic device using a binary storage and logic system to represent arbitrary data. This would be more appropriate; anything that we accept as a "computer" is a binary device. Or is it? Scientists are working very hard to develop computers that make use of quantum states, where the answer set is expanded from "on" and "off" to include "maybe". Other scientists are working on computers incorporating organic matter, which requires an analog interface. Even a hard drive is hardly a "binary" device. It records magnetic states. Magnetism isn't an "on" or "off" property; it is a relative property. A hard drive is really reading the relative strength of tiny magnetic fields. If a field has more than a certain amount of strength, it is considered "on". So it is hard to say that the use of binary storage or processing is what qualifies a device as a "computer". We could also try to make a derivative argument, which is that a computer needs to have transistors, but again, there are established computing devices without transistors, with more on the way.

      Luckily for us, The Enabler Argument makes it clear what kind of "computer" it means: "fast and cheap". That rules out anything below, say, a four year old PC. A checkbook, abacus, etc. should be unable to perform any task faster than a piece of modern computer hardware, given the right software. And therein lies a major problem. The software.

      This takes us back to where the whole argument began in the first place: the relationship between software creators and software users. We can safely ignore the folks who think that we are on the verge of having the two groups suddenly overlap. I've been told countless times that "Application XYZ is going to let users create their own code without a developer needed, with a simple drag 'n drop interface (or 'plain English queries' or 'natural language queries' or 'GUI interface' or 'WYSIWYG interface', or whatever)!" Luckily for me and my career, this magic application has never been written, nor do I see it on the near horizon.

      For one thing, it would require an entirely new way of writing and maintaining computer code. I have been thinking about this quite a bit lately, but I am unable to share my thoughts at the moment for a variety of reasons. Hopefully I will be able to share them in the near future.

      Another problem is the way developers think about how users perform their tasks. This also relates to the problem of user interfaces. Developers ask users certain key questions when writing code to help them translate the users' business needs into Boolean logic and procedural code (call it what you will, but even event driven and OOP code is actually procedural code, once the event is fired or the method executed). Developers ask questions like:

      If this value is higher than the threshold you have laid out, should I reduce it to its maximum allowed value, or produce an error message?

      Is this data going to be in a list or a spreadsheet format (when a developer says "spreadsheet" to a user, they really mean database, but users understand the idea of spreadsheets better than databases)?

      Under what circumstances do I need to highlight this data in the output stream?

      These are questions that need to be asked in order to code business logic, and there is nothing inherently wrong with them. But where this model of specification creation fails is in not understanding what the users' actual needs are. We are finding out how they do their job, but not why. A great example of this is the Microsoft Office Macro Recorder. Sure, the end result may be a bunch of VBA statements, so you could claim that the MOMR turns ordinary users into developers. But look at the code it produces; it creates code that replicates every stray mouse click, key pressed, etc. It replicates how the user does their job. This is why the MOMR code is so useless for all but the most trivial tasks. It does not have any understanding of the user's intentions. If the user selected a particular column and then made it bold, the macro simply says "select column C and then make it bold". It isn't even clever enough to reduce that to "make column C bold" and eliminate the selection aspect of it! Running that macro while the user has another column selected loses the user's prior selection. A developer will re-write the code appropriately. But even then, the software has no idea why the user made column C bold. This code will always make column C bold every time it is run, even though column C may only have been the correct column to make bold at the time the macro was recorded. If that decision was based upon the contents of the column, or maybe the nearby columns, the macro does not accomplish the user's goals in the least.

      This is where the developer fits in. The developer sits down with the user and asks questions like "under what circumstances do we make column C bold?" Even better, the developer will ask "under what circumstances do we make any particular column bold?" The best question to ask, though, is "why are we making columns bold?" That is when we transition from programming to meet "how" and start programming to meet "why". There is an amazing thing that happens at this point in the conversation: the developer may learn that the user's "why" is not best served by the "how" that was requested in the project specifications. The user may say, for example, "well, we want the column bold, because a bold column indicates poor sales numbers." The developer could turn around and say, "it sounds like what you need out of this is a way to identify underperforming accounts; would it be better if we created a second tab on the spreadsheet that contained only the underperforming accounts, along with the relevant information to help you see why they are not doing well?" All of a sudden, our job just became a lot more difficult, but at the end of the day, we have a much happier customer.

      It is at the point when software ceases to be written merely to replicate the how, and is instead written to fulfill the why, that we actually answer the questions that started this article.

      Up until relatively recently, software creation was merely a how oriented task. You could throw as many new languages and pieces of hardware as you wanted at the situation, but that is where we were. Only recently has software started becoming a why oriented task. When software is only written to meet the "how", we end up with software that only makes existing tasks easier. When we write software to accomplish the "why", we are writing software that allows things to be done that could not have been done without software (meeting either the literal or the pragmatic variation). The user's why is not to slightly alter the color code of some pixels. That's the how. The user's why is to remove the red eye from a digital photograph.

      When we begin to address the why, we write great software. Bad red eye removal software will turn any small red dot in an image into a black dot, after the picture has been taken. Good red eye removal software will hunt only for red dots in areas that are potentially eyes and turn them into black dots. Great software will first find a face-like shape, then search for eye-like shapes, and then turn the red dots into black dots. The best red eye removal software doesn't even need to exist on a computer; it would be in the camera! The camera would alter the flash (length, brightness, maybe a few different flashes that use different spectrums) to use as little flash as possible, and maybe even do something to prevent the red eye from ever reaching the lens, let alone making it into the image file.

      That's just one (probably not great) example of what I mean. And this is where we are seeing the user/developer gap. The developers are simply too far away from the users, in so many aspects, that they have a very hard time understanding the why of the software. I am certainly not claiming that the developer needs to know how to do the user's job (it certainly helps, though!) or that we find a way to make tools that allow users to generate software (impossible to do well at this point). I am saying that developers need to work to determine the user's why as well as the how, and figure out how to code to the why as opposed to the how. Some developers (including myself) are lucky enough to be writing software that will only have a handful of users. We are able to have these conversations with our users. Imagine trying to write a word processor that meets the potential why of every potential user! Some users use word processors as ad hoc databases, like grocery lists. Others use them to replicate typewriters. Others don't even create content; they use a word processor to review and edit content. And so on and so on. From what I have read about Office 12, I think Microsoft has the right basic idea: make the interface present only the options that are contextually relevant to what the user is trying to accomplish. But it still isn't anywhere near where it needs to be. It is still working with the how.

      File systems are another great example of a system that meets how but not why. When computers were fairly new, the directory tree made sense. The information contained within the directory names and file names themselves was a form of metadata. "Documents/Timecards/July 2005.xls" is obviously a spreadsheet from July 2005 that is a timecard document. This met the how, which was the filing cabinet metaphor; that was how these things were done for the hundred or so years prior to the computer. The why was "identifying and finding my data", which is what the directory tree structure does not accomplish very well. In reality, the user would be much better served by a system that would assign properties such as "Timecard", "Document", "Spreadsheet", "pertinent to July 2005", etc. to a file object. All of a sudden, we can do away with the "Timecard" directory, and have a "Timecard" group that contains all documents marked as timecards! That is a very powerful change indeed.
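      To make that concrete, here is a toy sketch of the idea (made-up tags and file names; a real implementation would live at the file system or shell level, not in a script):

      # Each file carries a set of properties; "folders" become queries over
      # those properties instead of fixed directory paths.
      my %tags = (
          'July 2005.xls'     => [ 'Timecard', 'Spreadsheet', 'July 2005' ],
          'August 2005.xls'   => [ 'Timecard', 'Spreadsheet', 'August 2005' ],
          'Status letter.doc' => [ 'Document', 'Correspondence' ],
      );

      # The "Timecard" group: every file marked as a timecard, wherever it lives.
      my @timecards = grep {
          my $file = $_;
          grep { $_ eq 'Timecard' } @{ $tags{$file} };
      } keys %tags;

      print "Timecards: @timecards\n";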

      Again, all too often our design and development process gets sidetracked by how and not why. How many times have we as users said something like "I think a drop down box for that information would be great"? We think a drop down box is best because that's how we saw it done somewhere else for something similar, not knowing that there may be a much better interface widget that we simply were not aware of. As developers, how many times have you heard a customer dictate architecture details to you to the point where you wonder if it would be easier to just teach them a programming language and let them do it themselves? This is because the user/developer conversation is focused on how. As a developer, in the most metaphoric sense possible, it is my job to translate your why into a software how. That is it: writing software that accomplishes what you need to do better than how you are doing it now. And that is the crux of the problem. If the user's current methods are so great, why are they asking me to write software for them? When we begin to address the why instead of the how, we are no longer replicating what may be an inefficient process (or taking what is a great process when performed manually, but inefficient as software), but helping to create a whole new process appropriate for the medium.

      A large part of the problem, from my viewpoint, is that programming languages are still how oriented. There is no why in writing software. Some languages are a bit better than others. Perl, for example, with its "Do What You Mean, Not What You Wrote" attitude, is a better one. C is a very bad one. C will be more than happy for you to alter a pointer itself instead of the bits it points to, when anyone reading the code would understand that you wanted to alter the data itself. This is at the heart of my complaint with AJAX. People are using AJAX to replicate rich client functionality with tools that were simply never designed to do that. Of course it will be sloppy. I won't go into the technical details at this time; I have been over them a number of times already. But AJAX, and many other Web applications, are replicating what is frequently poor rich client how to begin with, and compounding it by inventing a why that does not exist. The user does not use a word processor because they need to be able to type from any computer with an Internet connection. Thus, a Web based word processor does not address any actual why, but creates a bad one, then replicates a horrendous how in the process, using bad tools to boot.

      I have a lot more thoughts on this topic, and what changes need to occur for developers to become empowered to address the why while not sweating the details of how. Stay tuned.

      • #3265630

        Programming

        by fred mars ·

        In reply to Programming “How” versus programming “Why”

        Wonderful thoughts!  Great article!

        The main problem I see with developers moving from “how” to “why” is that many people are not willing to invest the time, patience, and effort for asking “why” even if it will give them increased benefit in the end.  It’s a similar issue with the idea of documenting everything as you go along in the project, rather than rushing to put out incomplete and incorrect documents at the end.  This is a fundamental change in the mindset of the people who are involved, and many people are unwilling to allow such change in themselves.  This is not only a “developer” problem, but potentially impacts almost all facets of our lives and relationships.  Dealing with the “why” before getting into the “how” can provide benefits across the board, but is a tough sell to most people.

      • #3265559

        Programming

        by dogknees ·

        In reply to Programming “How” versus programming “Why”

        Good article.

        Pretty much agree with what you've said. It is an enormous battle to get people to think outside their immediate experience. But I agree we need to do so to go forward. This has to come from both sides; it isn't possible for it all to be done on the developer's side without any investment (of time and intellectual effort) from the client. I battle this every day. I'm in an environment where the line "You know what we need" is often all the spec I get for a new development.

        This is very much what I’ve been railing against in my recent posts. There seems to be no acknowledgement from business that this is a shared problem. It’s not something the IT industry can fix without that input and support from our clients.

        I'm a bit wary of the idea that some new language is going to help significantly in this. Remember, all programming languages are capable of exactly the same results; Turing proved this quite a while back. All that different languages do is provide different models of the same thing. Some models may be easier to understand for some people, but that doesn't change the intrinsic limits of what can or can't be done in that language.

        I agree also with your comments on web-based apps. They should be limited to those situations where you NEED access from any web-enabled device. I also think the whole structure of HTML/… needs to be revised to enable full-function interfaces to be created as easily as they are in a standard development scenario. We've spent a lot of time learning the tricks of the Windows interface; let's leverage it, not throw it out. I like draggable/sizeable objects that respond instantly. That is the base standard for any interface for me. By all means go to a web interface, as long as it is at least as fast and responsive.

        Regards

      • #3265517

        Programming

        by staticonthewire ·

        In reply to Programming “How” versus programming “Why”

        The perfect client workstation consists of a system unit and a keyboard with JUST ONE KEY.

        All the user has to do is tap that one key all day – the software is SO SOPHISTICATED that it knows what the user means at ALL TIMES…

      • #3265397

        Programming

        by apotheon ·

        In reply to Programming “How” versus programming “Why”

        I’ve linked to this weblog entry from off-site, in a weblog entry titled users as programmers. If you read it, I’m sure you’ll quickly grasp why I linked it, but in short, I was inspired in part by this entry to write about programmers who develop applications for their own use, as opposed to those who do so only to satisfy project specs handed down from on high.

        In any case, I thought you might want to be made aware of the referral.

      • #3265690

        Programming

        by rayjeff ·

        In reply to Programming “How” versus programming “Why”

        This is a very interesting article. I guess I have to say that I've been lucky to have started my developing career by doing more of the "why", or understanding more of the "why" along with the "how". The first development project I worked on was a great experience because the users of the application I designed are basically computer-literate only as far as being able to turn a computer on and maybe emailing. But using computer applications…NO. So, being given the task of designing a custom application for that kind of environment, having the users along the way, every step of the way, helped a lot.

There were times when I would walk back and forth from office to office to make sure I understood what the requirements were in order for me to translate them into the application.

After reading the article, I have a question. Should a programmer/developer try to anticipate as much of a user’s needs as they can? A systems analyst I worked with once told me that, and I have tried to rationalize it. Maybe I haven’t been a developer long enough to see why that statement has merit.

      • #3106069

        Programming

        by tillworks ·

        In reply to Programming “How” versus programming “Why”

        Couldn’t it be argued that rational vs. agile development represent approaches to programming how vs. why, respectively? If so, it seems that agile development is the best approach to producing better software. The problem I’ve found with implementing agile development is that it’s difficult to give a client a firm quote for a project because it’s a figure-it-out-as-you-go-along approach. So we revert to the rational approach by creating a detailed, comprehensive spec before any development begins. That spec is inevitably focused on the “how”. And I think the software delivered frequently does not offer the end-user productivity gains it could have.

    • #3262907

      Grateful that I don’t “mashup”

      by justin james ·

      In reply to Critical Thinking

      So (no surprise here), Google is ramping up to start monetizing their map system.

There are a lot of companies and people pushing this “Web 2.0” idea. Never mind the fact that none of them seem to be able to define it (is it AJAX? is it cross-site use of APIs? is it simply a deep version of hot linking? does it involve lots of words that sound vaguely like English? does it involve another stock bubble?). Phil Wainewright over at ZDNet goes even further; he talks about “Web 3.0” (when companies learn to monetize the undefined “Web 2.0”). If he’s right, then Google is leading the charge to Web 3.0.

      People like Dave Berlind push the idea that the Internet rocks, because anyone can start a new business using a “mashup”. Yes, it is true that you can put together a very interesting website, maybe even a complete application as a “mashup”. You might even be able to monetize it. But to think that you are going to build a system for free out of other people’s data is insane. Imagine if each time you booted up a RedHat Linux system, your computer took up a bunch of RedHat’s bandwidth. Eventually, they are going to need you to pay for that bandwidth, whether it be by direct payment, or through advertising dollars.

This is the problem with “mashups”. People are using a free service to try to generate revenue. How long do you think that service will be provided to you and your customers at no cost and with no ads? And when the ads come, do you want the ads that appear on your site to be under the editorial control of a different company? Let’s say that you are running a religious website, and you have a map showing the location for a sermon about “Removing Lust from your Heart”. Google graciously puts the locations of the nearest adult novelty shops within a 50 mile radius of your church on the map. Just what your visitors wanted, I’m sure.

      Call me silly, call me crazy, but I would laugh if I were a venture capitalist or investment banker and heard a pitch that relied upon a third party keeping their service free and ad-free. I would say, “look pal, you’re being set up, the moment a critical mass of companies rely upon that third party, the third party will tell you ‘give us money or get stuck with ads’, you’re doing business with unknown costs, and you think I’m going to give you money?” And anyone who starts a business involving actual money on a “mashup” is a fool, or naive, or worse. Thanks, but I’d rather pay up front, and know what my costs are for the data, and be able to use it my own way.

      Don’t get me wrong, “mashups” can be neat, they can be cool, but I would never in a million years consider using one as a business or as a tool for my business. Personal site? Sure. Non commercial/non profit site? Sure. Business? No way.

      J.Ja

      • #3165843

        Grateful that I don’t

        by rayarosh ·

        In reply to Grateful that I don’t “mashup”

You’re right; if my business model was based entirely on using Google’s free Maps API, that would be foolish, but you need to ask yourself what the alternative is. If you have a localized mapping system, OK, maybe you could buy the maps and the satellite images and host them yourself at an extremely high bandwidth cost, not counting the high cost typically associated with any sort of quality, up-to-date digital map.

Now let’s say you want some sort of nationwide mapping system: the server space, cost of maps, and bandwidth that would be required would be unbelievable, all for a yet-undeveloped application that may not even gain traction with consumers.

What Google Maps mashups allow you to do is beta test your app with real maps on real servers and see what happens. You can see if people like your mashup that shows bars within 1 mile of hospitals, or houses outside police coverage areas.

        If it works then you can build your own infrastructure, or since Google really is just in it for the money, work out some kind of licensing or revenue sharing idea.

         

    • #3087404

      Web-based apps: WHY?

      by justin james ·

      In reply to Critical Thinking

      Lately, the idea of “How” software works as opposed to “Why” users use software has been on my mind (http://techrepublic.com.com/5254-6257-0.html?forumID=99&threadID=184332&messageID=1983228&id=2926438). Anyone who thinks that Web-based apps should take over the world is being naive at best. Web-based apps virtually never address the “why” that users use computers. How is having an application Web-based going to increase the user’s satisfaction or their productivity in the slightest? I can think of dozens of reasons why Web-based apps are more frustrating, less easy to use, less functional, and less productive than a desktop app. I cannot think of any pain points that a Web-based application relieves for 99% of the applications out there.

Why would I even want to bother with a central, Web-based repository for my documents like this? The free 64 MB USB thumb drive that one of my customers gave me has enough space for 99.9% of users to put every document they will ever need on it, and nearly every computer on the planet can open and edit an Office document, whether it be Microsoft Office, Open Office, etc.

      In addition, what third party vendor do I trust with a sensitive business document, to the point where I’m going to permanently leave my data on their servers? Umm… none. If I have a VPN on my network at work (and nearly every corporate network does now), the problem is solved right there. At the worst, I can call someone and have them email me the document, or I can VPN into my network and open a Remote Desktop session to my desktop PC.

I have never heard a user state their needs as “I need to do XYZ through a web browser.” I have heard “I need to be able to do XYZ from any computer I might find myself sitting at,” but if I wrote a well written application that did not need an installer and was small enough to fit on a USB thumb drive, I would have accomplished that exact same goal without even involving a network! [Addendum 4/3/2006: sMoRTy71 points out (correctly) that USB drives aren’t perfect solutions. This is an example of ways to avoid writing a full Web-based app; another alternative would be to offer this same software (a binary needing no installer) as a download. The point is, there is zero reason to write a Web-based version of an existing desktop application.]

These vendors of Web-based applications are not listening to their users, or even trying to put themselves in the shoes of their users. They are listening to the self-congratulatory babblings of venture capitalists, Wall Street investment bankers, techno-nerds for whom technology exists for the sake of technology, and so on and so on. Technology is a lever, nothing more, nothing less. If that leverage is not being applied to solve the problems the user has, it is a waste.

Every obstacle that you put in the user’s path hurts their productivity. Making an application depend not just upon a local application, but also upon a constant Internet connection, the uptime of a third party’s webserver, the performance of said server, and the end user having a PC that meets the requirements (including web browser configuration, a HUGE “if”) just throws more obstacles in the user’s path while giving them a second rate application that will ALWAYS be slower than native code looking at data on the LAN. This is “progress”?

      This is the Object Oriented Programming Model gone to a hideous extreme. Let’s just abstract everything, stick half the parts somewhere else, stop caring what is actually happening on the backend, and call it a day. I categorically reject this mode of thought. It does not help the user one bit. It just creates a nightmare for me (the developer), the user, and the people trying to keep the network locked down. Everyone a Web-based app touches is made unhappy.

Think “mashups” are so hot? The real question is “if someone made this data store available to me on my local network, would my users be best served by having a ‘mashup’, or something running locally?” The answer is almost always “a desktop application built to order”. Software development isn’t about “cool”. It isn’t about “nifty”. It isn’t about what language you are using, or whether or not you are coding to spec, or whether or not you followed “eXtreme Programming” or “Agile Programming” or “Ten Thousand Monkeys with Ten Thousand Compilers Programming”; it’s about meeting the needs of the users. To think anything else is a bad case of “developer hubris”, and even worse, it is a waste of your employer’s money and your users’ time.

Web-based applications, except for rare occasions, do not address the user’s needs; they stroke the egos of those involved with the development process. Google’s ego is swollen to the size of a small country. They think that because their fanboy base is so big, and because so many tech writers adore them (c’mon ZDNet, how many “Google Watch” type blogs do you need? Where is the “MySQL Watch” or “Oracle Watch” or “IBM Watch”?), they are actually any good. The truth is, their software is in perpetual beta and exists merely for the sake of suckering people in to develop a critical mass of users, so Google can then plaster ads all over it.

      Do you really want Google’s servers indexing your critical business documents and injecting their ads throughout them? When you give a map to your church’s special sermon about “Removing Lust from your Life”, do you want ads for every adult novelty shop appearing on that map (http://techrepublic.com.com/5254-6257-0.html?forumID=99&threadID=184332&messageID=1987268&id=2926438)? When you send an email to your therapist about tomorrow’s appointment, do you want your browser showing giant ads to help you find a website for treating whatever it is that you’re in therapy for?

I have yet to see functionality in a Web-based application that could not (and usually does) exist in a desktop application. Everyone is going ga-ga over “mashups” involving Google Maps. I hate to share a little secret, but Microsoft has offered this functionality through MapPoint for years to desktop users as a simple-to-program ActiveX/COM object. The only thing that using Google Maps over MapPoint gets me is that it is free. Yes, that may be a big deal for many, many people, but Google does not give anything away for “free”. What they mean by “free” is “you pay us with your users’ attention and gestures”. Do you want your users leaving your application or being distracted by Google’s ads? Neither do I. I am honestly surprised that Steve Gillmor (who understands “attention” and “gestures” better than anyone else) would support a mechanism where the attention and gestures are going to a third party in a process that actually subtracts value from your application.

Most users are fed up and frustrated with their existing desktop applications, which is why people are looking for a solution. The solution isn’t to start moving everything to the Web by simply replicating the existing, crummy apps with AJAX, but for programmers to change their mindset when writing code. Web-based, desktop, thin client, it’s all the same lousy software if the developers aren’t addressing their users’ needs.

      J.Ja


      • #3087304

        Web-based apps: WHY?

        by smorty71 ·

        In reply to Web-based apps: WHY?

“I have never heard a user state their needs as ‘I need to do XYZ through a web browser.’ I have heard ‘I need to be able to do XYZ from any computer I might find myself sitting at,’ but if I wrote a well written application that did not need an installer and was small enough to fit on a USB thumb drive, I would have accomplished that exact same goal without even involving a network!”

        I don’t think you’ll ever hear a user state their needs as “I need to do XYZ through a USB thumb drive” either. Come on, are you really suggesting that writing install-free apps to fit on a thumb drive is a better alternative to having a web app (for those users who need to access the app from anywhere)? Where is all of the data for the app going to be? Are you really going to put company data on a thumb drive which could easily be lost?

        You also present all of these scenarios for computers that might not have the specs to run a web-app, but never mention that users *could* find themselves on computers without USB ports.

If you don’t like web apps and you don’t like Google, just say so (actually, after this post, you really don’t need to say so). But saying that local apps running on USB drives (with company data on them) are a better/safer way to solve user needs is just silly.

      • #3087268

        Web-based apps: WHY?

        by wayne m. ·

        In reply to Web-based apps: WHY?

        Availability and Maintenance

        Although I agree that web-based applications are sorely lacking in a lot of operational areas, client-server and client-only applications are difficult to make available and maintain.  This is the driving force behind web-based applications.

Applications with client-resident software require a software installation. This limits the availability of the software. One can either choose to issue every employee a license and CD-ROM of each program he might ever need to access, distribute updates as they become available, and have each employee install those updates and be responsible for resolving conflicts on his machine; or one can have a web-based application that is compatible with the top one or two browsers and have the application available to any employee at any time. The CD-ROM version also runs into problems when one is at a client site and cannot load software onto the available machines.

There is definitely a need to make software widely available without license concerns, a need to minimize incompatibility due to multiple versions, and a need to minimize the dependency on the hardware platforms. For one large, geographically dispersed customer, software updates for one custom application were limited to one or two a year because of the expense. One update cycle took almost 2 years because an underlying third-party component required a field-wide memory upgrade.

        Contrast that situation with a recent web-based development effort.  We made a development web-site available to our user group to evaluate monthly versions and provide feedback.  We were able to incorporate user recommended changes in a timely manner that would not have been feasible if we had to constantly distribute the software and have the users install it.  As a further example, would you even consider using TechRepublic if you were required to install the software on your client machine?  Would you faithfully install updates?

        There is value to the corporation and to the end user to making software widely available with limited restrictions based on the location and specifications of the machines it will be run on.  Unfortunately, the web browser is insufficient to provide the base capabilities needed to run remote software.  AJAX is merely the latest patch being laid on top of a poor foundation.

         

      • #3106509

        Web-based apps: WHY?

        by aaron a baker ·

        In reply to Web-based apps: WHY?

Damn, that was a good one; excuse my language. 🙂

But you’ve hit so many nails on the head I don’t know where to start.

My biggest beef was the ridiculous amount of space (250 MB) in Hotmail. I wouldn’t trust Microsoft with even the slightest of secrets, and now I’m supposed to use them as my e-mail database? They who, through their contracts (you know, the ones you are “forced to sign”), promise to keep your information private, and then, without telling anybody, went out and sold that very same info, so that today we get tons of junk mail in our Hotmail inboxes and MSN Messengers. You can’t go to Hotmail and download and save any of your mail on your own computer; it must be saved at Microsoft. I don’t like to be bullied, hence my Outlook Express.

Oh, they have “filters”. BIG DEAL!! They should never have sold the information in the first place. They abrogated their contract with us. This is only one sample of what using the net as a database can end up looking like.

As for the info thief/hog, Google, well, I won’t go there except to say that I am absolutely amazed at how many people were taken in by this scavenger. It amazes me that people don’t see them for what they really are and are too busy jumping on the eye candy to even look up. Someday the piper will come, and then these people will know.

Yahoo is the same thing; as a matter of fact, they don’t even let you use your own browser.

So you see, I couldn’t agree with you more. There is no way on this planet that Web-based apps are anywhere near the quality, security, and dependability of those that we have on our own systems.

I am a private man, and so I’ll just continue to avoid the aforementioned like the scurrilous plague that they are, and I shall laugh when the complaints about Google and Yahoo start pouring in.

The only reason this hasn’t started so far is that people don’t know… yet!

I close by saying thank you

for writing an excellent article and pointing out what people, especially we in the know, “should have known”.

        Web Based Apps are not good at all and I for one will not ever use them.

        Warmest Regards 

        Aaron   😉

      • #3106438

        Web-based apps: WHY?

        by wayne m. ·

        In reply to Web-based apps: WHY?

        Do Not Confuse Web-Based Applications with Application Service Provider Model

        Although the vast majority of Application Service Providers (ASPs) use Web-Based Applications, the majority of Web-Based applications are not provided by ASPs.  Most Web-Based applications are used and maintained by the same corporate entity.

         

      • #3106406

        Web-based apps: WHY?

        by mark miller ·

        In reply to Web-based apps: WHY?

        I agree up to a point, from the user’s perspective. Web-based apps. are not as responsive as thick client apps. I’ve used OE and a web mail client, and I can manage my e-mail so much faster with OE (or any thick mail client for that matter). It’s my understanding that the impetus for web apps. was that administering thick client apps. was getting too expensive. So it wasn’t necessarily an end-user decision, but an IT management decision.

        From my perspective the main advantage web apps. have is that IT managers can control who has access to their application. It’s more difficult to do this with a thick client, since an end user literally has a copy of it. A web app. just essentially provides access to a service. If an employee leaves the company, it’s a simple matter to deny access to that person. With a thick client, it’s not so easy. Even though it’s illegal, I’ve heard stories about employees taking thick client apps. with them when they leave and then using them for a competitor or to start their own businesses. In a thick client project I’ve worked on, they implemented a time clock system, such that the app. gets a key from a server, which allows access to the thick client for a limited time. Once it expires, the app. no longer operates, until the key is renewed. This isn’t the best solution, but better than no limit on application use. With a web application this would be a simple matter. Just take the user off the central database of valid users, and the user is denied access then and there. I and the company I work for have discussed converting their app. to the web for this reason, but they said they need employees to be able to run the app. without an internet connection. So there you have it. A client/server app. that required the user to sign in every time they used it would work just as well, IMO.
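
To make the time-limited key idea described above concrete, here is a minimal sketch in Perl, purely as an illustration; the key file layout and the renewal step are hypothetical, not the poster's actual system:

    #!/usr/bin/perl
    # Sketch of a time-limited key: the client caches a key with an expiry
    # timestamp and refuses to run once it lapses, until a new key is
    # fetched. The file layout and renewal step are hypothetical.
    use strict;
    use warnings;

    my $key_file = 'license.key';    # line 1: key, line 2: expiry (epoch seconds)

    sub read_key {
        open my $fh, '<', $key_file or return;
        my @lines = <$fh>;
        close $fh;
        chomp @lines;
        return @lines[0, 1];         # (key, expiry)
    }

    my ($key, $expires) = read_key();
    if (!defined $expires || time() > $expires) {
        # The real system would call home to the licensing server here.
        die "License key missing or expired; please renew before continuing.\n";
    }
    print "License valid for another ", $expires - time(), " seconds.\n";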

        There are technologies available now from both Sun (JRE, Swing) and Microsoft (.Net Windows Forms) that make it possible to deploy thick client apps. with the advantages of web deployment. Personally I hope this catches on. As an application developer I don’t particularly like web apps. They have their good points, but overall I think they’re more of a headache, more expensive to develop. I think web apps. do well when an application is data-driven, such that controls on the screen can be added dynamically, depending on the situation. Web technology makes it relatively easy to do this, since formatting is done like a word processor would do it, and you don’t have to worry about everything being aligned since the HTML renderer does a lot of that work. That’s the only good point that I see with it from a developer’s perspective. Other than that, state management is a pain.

        Thick client apps. have the advantage that so long as the user is running the app., the app. is continuously running. You can save values to a variable and be done with it, and be assured that you can access those values later, anytime you want. With the web, you either have to save these values to session state, to a database table, or serialize it between page refreshes, and then get them back out again the next time the app. does a postback. It’s better than it was 6 or 7 years ago, but it’s still clunky. The other thing that I never get over is the fact that unless you’re using a technology like Flash or AJAX, every time the user posts back a page, the ENTIRE page is being refreshed! Even if the user has just entered text into 3 textboxes and then hit “submit”, the entire page of content goes back to the server, and then back to the browser. How inefficient! Thick client apps. never need to do this. They can just send/receive the necessary data.

        Just my 2 cents.

      • #3106384

        Web-based apps: WHY?

        by dave lathrop ·

        In reply to Web-based apps: WHY?

I agree that not all applications should be web applications. They have their pros (easy deployment, data kept on server, etc.), but also a lot of cons.

        However, I find the greatest problem is people who don’t design good applications! Regardless of whether this is a web app, desktop (VB, C#, C++, Java, COBOL, or whatever), client/server or scripted in a host (MS Access or Excel). Too many times, I see CRUD screens (excuse me “forms”, they just look like 3270 screens) for each table. This rarely matches the user’s task, so they are left to figure out what sequence of screens and CRUD operations will achieve their current goal. A well designed and constructed application with any implementation technology that supports the users’ work and usage patterns will always beat a poorly designed and/or constructed application that makes their life more difficult.

      • #3106232

        Web-based apps: WHY?

        by kenfong ·

        In reply to Web-based apps: WHY?

The web is not perfect, but it does solve issues like deployment, upgrades, access-anywhere, cross-platform support, etc. Companies can subscribe to only what they need, and they don’t need to keep a team of people just to maintain proprietary software. Why spend money on big machines, storage, backup, and people when the core of your business is selling bananas? Leave those things to people who are good at them, and let them bear the cost of an n-tier infrastructure and server clusters. Let users call them at 4 AM, instead of asking you where the USB port is.

Thick client software does a better job in certain areas such as scientific projects, graphics work, software development, etc. It’d be cool, though, if some of these tasks could be done over the web – so I don’t have to worry that applying service pack #58 will screw up my video editing software, or be forced to install dx9.1b and dnet2.0 because a little backup tool requires them.

The web sucks in areas like responsiveness and interface. But the idea of thin-client computing has become the de facto direction.

      • #3106187

        Web-based apps: WHY?

        by driverjoe ·

        In reply to Web-based apps: WHY?

I agree that we don’t need or even want web apps, but it’s not about what is wanted or needed! It’s about subscriptions! Why have you pay once for a program when they can have everybody pay monthly for something nobody needs? Now that’s innovative.

    • #3106745

      Micro distributions are NOT micro kernels

      by justin james ·

      In reply to Critical Thinking

      All too often, I read a tech writer who seems to think that a Linux (or some other OS) that takes up a small amount of disk space is a “micro kernel.” This is patently absurd. These are micro distributions. Most Linux and BSD based operating systems already have an extremely small kernel, small enough to fit on a floppy disk along with enough configuration files and utilities to get a system up and running and perform a restore.

If you compare these “micro kernel” OS’s like Damn Small Linux and PuppyLinux to a full blown Linux distribution like RedHat or SuSE, what you will see is that they really are no different. In fact, they are usually just a stripped down version of those “fat” OS’s! All that they have done is remove every piece of code that the distribution makers view as “unnecessary”.

Indeed, you should be able to take the rc.d and other configuration files from one of these “micro distributions” and put them onto RHEL or SuSE or Mandrake or whatever, and have that OS run in an identical memory and CPU footprint as the “micro distribution.”

At the end of the day, a “fat” OS is only “fat” by virtue of how much disk space it consumes. Most of these “fat” OS’s are simply loaded down with fifteen different open source versions of the same type of application. This is where we see RHEL or SuSE having more lines of code than Windows. Windows ships with one web browser, one media player, one basic text editor, etc., whereas a *Nix will ship with five different applications for each task and let you choose which one you prefer.

I have tried working with these “micro distributions” and, to be honest, it is a pretty miserable experience. I have tried Damn Small Linux as well as PuppyLinux. It was extremely frustrating to not be able to install any software that was not available as a binary package through the system’s installer, simply because there was no compiler available. *Nix without a compiler is like a car without a steering wheel. PuppyLinux did have a Perl package, but it was missing most (if not all) of the standard Perl libraries, rendering it useless to most programs that need them (and, without a make command and/or CPAN, it is very difficult to add new packages). PuppyLinux did not have cron (the standard *Nix task scheduler), so it was difficult to see it being used as the basis for any serious server application.
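
As an aside, here is a quick way to see how stripped-down a distribution’s Perl really is; this is only a sketch, the module list is illustrative rather than exhaustive, and the output format is my own:

    #!/usr/bin/perl
    # Quick probe: which commonly bundled Perl modules are actually present?
    # On a stripped-down micro distribution several of these may be missing.
    use strict;
    use warnings;

    my @modules = qw(File::Find Getopt::Long POSIX Time::Local Data::Dumper);
    for my $mod (@modules) {
        if (eval "require $mod; 1") {
            print "ok      $mod\n";
        }
        else {
            print "MISSING $mod\n";
        }
    }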

Anyone who thinks that these “micro distributions” are going to make a huge difference has not actually tried doing things with them that are trivial tasks on a “fat” *Nix. As stated before, the only difference is how much disk space they take up. If an extra gigabyte of disk space is going to make that much of a difference to you, you are in trouble anyway. And if you think that these “micro distributions” are going to help you reduce system resource requirements, you are wrong there too. Sure, many of them are compiled with a 586 or even a 486 target in mind, but at the end of the day, all *Nix distributions are pulling their kernel from the same place (within families, of course).

      J.Ja

      • #3286131

        Micro distributions are NOT micro kernels

        by jmgarvin ·

        In reply to Micro distributions are NOT micro kernels

        I have to disagree. These micro distros are great for thin clients or for that older desktop that just doesn’t have the power to push “full” modern distros.

I’ve actually set up DSL on a thumb drive and had it boot from there…this created a portable thin client that could plug in anywhere. Had I actually thought about it, I could have modified the distro to authenticate me back to the domain so that I could do my admin tasks, or have it authenticate the users onto a different domain so they could do their user tasks.

        These micro distros have their place…You typically don’t want a compiler on a desktop machine, so this makes sense.

    • #3106337

      Improving the code writing process

      by justin james ·

      In reply to Critical Thinking

Writing code is a goal oriented process. Unfortunately, the tools that developers have do not assist them in attaining their goals. The tools are getting better (as someone who has had to write COBOL in vi, I can attest to that), but they still do not understand just how programmers operate. The development tools themselves are still how oriented, not why oriented. Let us take a look at how the code writing process hinders rather than helps developers.

      The documentation is predicated on the user knowing what they are looking for.

This is improving only because the IDEs have glued ToolTips, AutoComplete, etc. into the editors. Coding now is a process of naming your variable, pressing the period, and then scrolling through the list of methods and properties to find what sounds like it does what you want.

      But try starting off from a state in which you do not know what objects you need. In other words, try something you have never done before. You are in deep trouble. Language and API documentation is still dominated by how and not why. It assumes you know what object class (or variable type, or function, or whatever) you need, and then shows you what your options are. This is required information, but it is not very helpful, especially if you are not familiar with that language’s terminology (or that library’s terminology). It is so easy to not find what you are looking for, if the language has standardized .ToString() for everything, but what you are working with has .ToText() instead. More to the point, there needs to be more documentation like the Perl FAQs: goal oriented documentation.

The Perl FAQs are perfect. There are hundreds of “I am trying to do XYZ” items in there, and code that shows you exactly how to do it. The documentation asks the user, “what is your why, and how can I help you accomplish that?” I use the Perl FAQs more than the actual reference most of the time; I already know the language syntax, but there are a lot of whys that I have not tried to do. Indeed, the Perl documentation contains so much usable code in a goal oriented layout that it is possible to write 75% of a program out of it. Just try that with Whatever Language In a Nutshell. I have only seen one programming book laid out in a “let us accomplish something” format as opposed to a “here is how we work with strings, here is how we work with numbers” format.
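
To illustrate the style (this is my own sketch in the spirit of the Perl FAQs, not a quote from them), a goal-first entry might be: “I am trying to count how many times each word appears in a file.”

    #!/usr/bin/perl
    # Goal: count how many times each word appears in a file.
    # The file name is hypothetical; the point is the goal-first layout.
    use strict;
    use warnings;

    my %count;
    open my $fh, '<', 'input.txt' or die "Cannot open input.txt: $!";
    while (my $line = <$fh>) {
        $count{ lc $1 }++ while $line =~ /(\w+)/g;   # tally each word
    }
    close $fh;

    for my $word (sort { $count{$b} <=> $count{$a} } keys %count) {
        print "$word: $count{$word}\n";
    }

The reader starts from the goal and gets working code; the reference material on hashes, regular expressions, and file handles only matters once the goal is met.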

      The tools are too focused on writing code.

I know that this is counter-intuitive. IDEs are all about code, right? Well, not really. Writing code is the how. The true why is “creating great software.” Writing code is simply a means to that end. The reality is that too many pieces of software simply stink, not because the internal logic is no good, but because the programmer left things like error handling, input validation, etc. out of sheer laziness or ignorance. An IDE that lets you try an implicit conversion when you have strict on in a strongly typed language is doing you no favors, especially if that block of code is somewhere that only gets accessed once in a blue moon. A language or IDE that makes input validation “too much hassle to bother with” is not doing anyone any favors.

Here is a great example: too many web applications rely upon a combination of JavaScript and the maximum length specification in a form object to do their validation. Unfortunately, not everyone has JavaScript turned on, and many people use some type of auto complete software to fill out a form. And someone can always link to your application backend without replicating your interface. So no matter how much input validation you do on the client side (not that you should skip it; users typically prefer getting the error before the form is actually submitted to a server), you still need to do it on the backend. Sadly, the concept of tying the input validation logic on the server side to the input validation on the client side is still pretty rare (ASP.Net with its Validator controls is good, but not great). So you end up with code that either is a hassle for the end user (no JavaScript validation), or is vulnerable to all sorts of nasty things (no server side validation), or you are forced to write all of your validation code twice, in two different languages.
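
A minimal sketch of the server side half of that duplicated logic, assuming a CGI-style handler and a hypothetical “quantity” field whose length and numeric-only rules also live in the page’s JavaScript:

    #!/usr/bin/perl
    # Server-side re-validation of a field the browser already "validated"
    # with maxlength and JavaScript. The field name and limits are hypothetical.
    use strict;
    use warnings;
    use CGI;

    my $q        = CGI->new;
    my $quantity = $q->param('quantity');
    $quantity = '' unless defined $quantity;

    # The same rules the client-side JavaScript is supposed to enforce,
    # repeated here because the request may not have come from our form.
    my @errors;
    push @errors, 'quantity is required'            if $quantity eq '';
    push @errors, 'quantity is too long'            if length($quantity) > 5;
    push @errors, 'quantity must be a whole number' if $quantity !~ /^\d+$/;

    if (@errors) {
        print $q->header(-status => '400 Bad Request', -type => 'text/plain');
        print "$_\n" for @errors;
        exit;
    }
    # ... it is now safe to use $quantity ...

Notice that the limits (5 characters, digits only) are repeated verbatim from the client side; nothing ties the two copies together, which is exactly the maintenance problem described above.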

This is all a byproduct of the sheer amount of effort that is needed to write this code. It is not brain work, it is drudge work. A well written program with a large amount of user interaction but little complex logic behind it, in a language with large libraries, can be 25% input validation. Let’s be real, most applications are of the form “get data from a data source, display it to the user, allow the user some C/R/U/D functionality, and confirm to the user that the procedure was a success or failure.” That is all most programs are. A significant portion of security breaches are caused by failure to validate input. For example, Perl has a known buffer overrun problem with using sprintf. “Everyone knows” that you need to validate user input before passing it to sprintf, to ensure that it will not cause a problem. And either through laziness or ignorance (note how I put “everyone knows” in quotation marks), this does not happen, so you get a web app that can execute arbitrary code. The WMF exploit, the zlib problems, et al. all boil down to a failure to validate input.
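
A hedged illustration of the sprintf point; the input value below is made up, and the principle is simply to never let user data act as the format string:

    #!/usr/bin/perl
    # Illustration only: user input should not be the sprintf format string.
    use strict;
    use warnings;

    my $user_input = "O'Brien & Sons";   # pretend this came from a form

    # Risky: user data as the format string. Something like '%2000000$s'
    # submitted here would be interpreted as a format, which is exactly
    # the kind of problem being described.
    #   my $message = sprintf($user_input);

    # Safer: the user's text is only ever an argument to a fixed format.
    my $message = sprintf('You entered: %s', $user_input);
    print "$message\n";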

Imagine if the IDE (or the language itself), instead of being aimed at providing you with fancy indentation and color coding and whatnot, actually did this on its own. Perl does this to an extent with variable tainting; it will not let you pass a variable that came from the user to certain functions until you have first untainted it with other functions. More languages need a mechanism like this. But it is not enough. The idea that user input is always clean needs to be erased from the language and the tools, and replaced with a system that encourages good coding practice through compiler warnings, or better yet, handles it for you. Imagine if your language saw you taking the contents of a text input and converting it to an integer, and had the good sense to automatically check it at the moment of input to ensure that it would convert cleanly. That would be a lot better than what we do now: trying the conversion, catching an exception, and throwing an error back. This lets the programmer focus on the why, in this case, getting numeric input from the user.
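
A minimal sketch of the taint idea (run with perl -T); the untainting regex is my own assumption about what counts as “clean” for this hypothetical input:

    #!/usr/bin/perl -T
    # Taint mode sketch (invoke as: perl -T script.pl). Data from outside
    # the program is "tainted" and cannot reach unsafe operations (system
    # calls, etc.) until it is untainted by pulling the safe part out with
    # a regex capture.
    use strict;
    use warnings;

    $ENV{PATH} = '/bin:/usr/bin';    # taint mode insists on a sanitized PATH

    defined(my $raw = <STDIN>) or die "No input given\n";   # tainted input
    chomp $raw;

    # Untaint by keeping only what we consider safe (digits, in this case).
    my ($count) = $raw =~ /^(\d{1,6})$/
        or die "Expected a number of 1 to 6 digits\n";

    # $count is untainted and already proven numeric, so the automatic
    # "will this convert cleanly?" check asked for above has happened.
    print "Will process $count records\n";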

      Program logic is a tree, but source code is linear

This is a problem that I did not even see until very recently. Very few programs are written procedurally at this point. The event driven programming model has taken over, and for good reason. Unfortunately, our entire source code creation process is derived from the procedural days. Look at some source code. What you see is that you have a bunch of units of code, all equal to each other. Even when working with object oriented languages, the tools themselves treat the code writing process as a linear, procedural system. You write an object; within that object, all methods are equal within the code. Navigating the code is tricky at best.

Even with an IDE that collapses regions, functions, properties, etc., when the code is expanded, it is still a plain text file. The way we have to write overloads is ridiculous. The whole process itself is still stuck in the procedural world, but we are writing for event driven logic. The tools simply do not understand the idea that some blocks of code are inherently derivatives of, or reliant upon, other blocks of code. Too much code serves as metadata for the rest of the code (such as comments, error handling, function parameters, and more). It does not have to be like this, but it will require a major shift in thinking, both by the people who create the tools and the people who use them.

      Code writing is too separate from the rest of the process

Right now, the tools for completing a software project are loosely integrated at best. Even with the major tool suites, the tools within the suite are not all best of breed, and the better products just do not integrate well into the suite. For example, it would be pretty painful to write a VB.Net Windows desktop application in anything but Visual Studio. Even a simple ASP.Net application would be a hassle to work with outside of Visual Studio. Sadly, Visual Studio’s graphics tools are crude at best. Its database tools are not so hot either, especially for database servers that do not come from Microsoft. Adobe/Macromedia makes excellent graphics editors. But Photoshop, Illustrator, etc. simply do not acknowledge that Visual Studio exists. So the tools that the person making the graphics is using (Photoshop, Illustrator, Freehand, Flash, and so on) have zero awareness of Visual Studio, and vice versa. The graphics person has to do his work and then pass it to the programmer and the GUI person so they can see how it fits.

Microsoft is trying to address this problem with the upcoming Expression system, but I am not holding my breath. I will believe it when I see it. In the meantime, this isolation creates a problem where the graphics artist does not realize that their vision cannot be implemented within the code. The systems architects have a hard time seeing that their detailed database layout is nearly impossible to turn into a usable interface. The project manager does not get an idea of just what is needed to make the workflow go smoother. And so on and so on.

      It is great that the tool makers have brought testing and version control into the process. This helps tremendously. But these tools still are not perfect, and could use a lot of improvement, particularly version control. At this point, version control is still a glorified document check in/checkout system with a hook into a difference engine. It has no awareness of the program itself and it is still very difficult for multiple people to simultaneously work on the same section of code. Even then, as one person makes changes that affect others, the version control system is not doing much to help the team out. I worked at a place that used CVS; the system was so complicated that we barely used it. For what little it did, it was not worth the effort. Version control, even in a single developer environment, is a major pain point. I have some ideas on how to improve this, but this is not the time to discuss them.

      The situation is not as bleak as I paint it

      I know, I make it look like it is a wonder that programs get written at all. It is not quite so bad as that. But I think that it is time that the tools that we use to create software evolve to meet the why of the code writing process, as opposed to making the how easier. There are a lot of great things about the current tools, and I would not go back to vi and command line debuggers for any amount of money. But I also think that the tools that we have need to make a clean break from their past, and help us out in ways that they simply are not doing at this point in time.

      J.Ja

      • #3106330

        Improving the code writing process

        by jmgarvin ·

        In reply to Improving the code writing process

I’d like to point you to a book from a really great guy named Allen Stavely. It’s called Toward Zero Defect Programming. This will really open your eyes and let you see that programming CAN work in almost any environment; it just takes a slightly different take on things. http://www.awprofessional.com/bookstore/product.asp?isbn=0201385953&rl=1

On a side note: I don’t think programming in this method is any more expensive than not using it. However, I firmly believe that the long term costs are FAR greater if you don’t use this process.

      • #3106117

        Improving the code writing process

        by wayne m. ·

        In reply to Improving the code writing process

        Why to How Will Remain a Manual Process

        I guess I do not see any technology replacing humans in understanding the “why” of an operation.  There is simply too much visual, and to a lesser degree, audible information to process.  People are by far the most efficient means of processing diverse information.  Unfortunately, many of the processes put into place serve to isolate the developer from the sources of information.

As to some other points, I would put the amount of custom code developed for data validation at a much higher level than 25%, and find that in many systems it is repeated in three places: client, server, and database. I say repeated, not duplicated, as it is rare that precisely the same validation is performed in all three places, and this problem is compounded because each place uses a different coding language.

I will quibble slightly with the comment concerning procedural development. Code is packaged, more or less, in an object organization, but I think most development is done (and should be done) procedurally. An event is just a different means of launching a procedure (interrupt driven versus polled). The lack of a standard mechanism to disable events or define critical regions has led to its own share of problems. Back to my original point, though: newer agile development methodologies are returning the focus to procedural development, away from the object development of the late 1980s.

I feel the definition of the process of writing code is in its infancy. To date, code development has been treated as a mysterious black box, and the focus has been on adding more and more restrictive controls before and after the actual code development. This will require developers, however, to actually work with the users to assimilate the “why” of the workflow. Developers will not be able to sit in a dark room with specifications handed to them.

        Back to the original point.  People are simply the best available means for processing and consolidating information.  Understanding the why will remain a manual process, at least through the end of my working career.

         

      • #3285370

        Improving the code writing process

        by tillworks ·

        In reply to Improving the code writing process

        What about the O’Reilly “Cookbook” series? These stay near the top of my stack because they take the “why” approach. I’ve learned a great deal of “how” by searching for “why” solutions in these references.

      • #3103718

        Improving the code writing process

        by dogknees ·

        In reply to Improving the code writing process

        This won’t be a popular comment, but mine rarely are!

One of the issues with all this is that people don’t seem to realise that writing software is a fundamentally difficult thing to do. Anything more than the trivial requires significant intelligence and effort to achieve a successful result. This is unlikely to change anytime soon. Like many other things in life, it cannot be made simple enough for the average joe to understand.

Learning a new system is no harder now than it was 20 years ago. You still pretty much need to read all the documentation cover to cover two to three times before even starting to use the product. You just aren’t going to learn it in any easier way. Learning part of a system is pointless; you need the whole picture to make rational design decisions.

This probably sounds pretty arrogant, but I don’t believe it is. The simple truth is that there are things in life that only some people are capable of doing at all, let alone doing well. Saying this is not arrogance, it’s reality. What percentage of people are capable of a 2 metre high-jump? To try and dumb everything down to the point where every person can do it is pointless and bound to fail. The world is just not that simple.

        So, some positive ideas. The industry needs to recognize the difficulty of the task and reward it accordingly. If it’s harder to write good code than manage the company, then you pay the coders more than the managers. Attract smart people in the usual way, by paying them decently. Stop treating coding as an entry level task that most will move on from to management. Make it the goal.

As regards tools: the manufacturers need to start applying some of the effort they apply to user software to developers’ tools. Simple example: I do a lot of development in Excel/VBA. One thing that often happens is that you name a range and then use that name in your code. If you then rename the range for some reason, the code is broken. Why isn’t the system smart enough to work out that I’ve used that name and fix my code? It’s certainly possible, but does M$ do it? Never!

        That’s one simple low level example where the system could assist. It won’t make development a no-brainer, but it will alleviate the load and help us produce more stable software.

        One area where some progress is being made is in UML and related systems/products. If you work within the “system”, you can automate a significant part of the coding process and bring the problem definition closer to the customers view of the world. They can give you more useful information if they understand the specs and the process  better.

I still think the quest for a silver bullet that makes development an automated process is going to be a long one, but progress is being made. I guess the hardest thing for many of us is keeping up with this progress, or more accurately, even getting to know it’s happening. Reminds me of something about alligators and draining swamps! It’s very easy, when your head’s down, to miss what’s going on around you.

        Personally I believe that we will develop true machine intelligence sometime in the next 20 -40 years. Then maybe, we’ll have automated developers. The job won’t have gotten easier, but the “tools” will be MUCH smarter.

        Best Wishes to All for Chocolate Day.

         

      • #3103706

        Improving the code writing process

        by tony hopkinson ·

        In reply to Improving the code writing process

I certainly agree that there is far too much emphasis on how as opposed to why. Proprietary systems and their certifications have exacerbated this problem, in my opinion. Equally, education seems to have gone more how oriented; I’ve worked with more than a few recent graduates who have not been taught why we are where we are.

They don’t know why structured methodologies were invented; they don’t see the progression to OO. They don’t see the relationship between the procedural (linear) and event based models. The basics, which I learnt because that’s all there was, are missed out or at best superficially covered. Worse still, and indicated somewhat by your post, is the idea that they are somehow no longer important.

The ability to write code is a particular way to think. Some people can do it, some can’t. Same as some are mathematically gifted, others geometrically. I can see how IDEs could improve, but any attempt to control how a developer develops is another constraint in the process. So a more ‘intelligent’ IDE could help in terms of focus and imposition of standards, but as a developer it is necessary to step back from the detail; sometimes the standards preclude what you wish to do for perfectly valid reasons.

A piece of software to ‘write’ code writes code in the ways that its designers think you should. Were they correct? Will they always be correct? Who knows. I guarantee that if we went down this route with yet another phase of dumbing down of the discipline, we would suffer even more bad design. If you don’t have the talent that is the ability to program, you will never be any better than the designer of the tool that stands in for the missing talent.

        Think of it this way, if you were to sit down and write a program to play chess, how much better would the program play the game than you do yourself. Certainly it would never miss fool’s mate, but it would never come up with a new gambit either.

Chess is much simpler than programming.

         

      • #3287259

        Improving the code writing process

        by jefromcanada ·

        In reply to Improving the code writing process

        I agree with most of what you’ve said.  But I see the problem not as being “how vs. why”, but rather a “simple” problem of abstraction.  Regardless how many libraries, functions, and standard procedures there are, programmers by their nature wish to reinvent things so they have their own stamp.

        Whether it’s reinventing the UI or the processes, there is an emphasis on the “nitty gritty” that you refer to as the “how”.  There are a few tools that generate standardized code based on requirements documentation.  Most also allow you to tweak the generated code.  The more you tweak, the more you drift back into the “how”.

        My current tool of choice is from the TenFold company.  Their tool removes all the “nitty gritty” programming chores from the implementation process, focusing instead on requirements building and rule-setting.  In an innovative twist on things, their product “renders” an application, just as a spreadsheet program “renders” a spreadsheet.  By building all the housekeeping tasks (like screen creation, menu generation, data validation, security implementation, data access, etc.) into the rendering engine, designers get to concentrate on the “why” (as you put it), and let the rendering engine create a completely standardized, secure, and fully documented running system.

        Caution:  The TenFold company is very small, and currently very cash-poor.  I don’t know how long they can continue as a viable entity.  Therefore, my comments refer to the quality and innovation of their product, and do not constitute a recommendation with respect to the company.

    • #3285920

      Anticipation and program design

      by justin james ·

      In reply to Critical Thinking

      A recent TechRepublic discussion focused upon the idea of anticipation in program design. I answered that the designer should not try to anticipate the limit of the user’s needs, and that the software should try to anticipate the user’s actions. What exactly does this mean, and how does it relate to my recent theme regarding how and why?

Regular readers will be aware that I advocate the idea of the software developer viewing a project as a method of fulfilling the user’s ultimate goals, not necessarily writing the software that the user originally had in mind when they requested it. Look at the common spreadsheet. It started as a means of replicating what accountants, bookkeepers, and other number crunchers were doing with ledger books. Now it gets used not just for that purpose, but as a quick, easy to use, lightweight database. If the developers of spreadsheet software had designed it so that all it could do was perform basic numerical computations on columns of numbers, it would not be very useful at all. It would have fulfilled the original design request (replicate with software what was done with ledger books), but would not have met the true why. The real why was “we need to be able to put data into a Cartesian coordinate system and perform operations upon that data.” As users push spreadsheets beyond their intended purposes, the software developers add in functionality to address the new uses, which allows even further innovation by the users.

This is one reason why I am such a big fan of interpreted programming languages. Interpreted languages allow the developer and the user to quickly expand functionality beyond the original specifications, in ways that were never imagined. A piece of software that exposes its functionality to a macro or scripting language (always an interpreted language) is always more useful than a program that is compiled with whatever functionality the developer put in and is unable to do anything else without talking to the developer. This is not to say that all software should be written in interpreted languages, of course, but that they should support the use of an interpreted language within the software itself. Providing the user with a simple macro language within your application directly addresses the idea of not anticipating the limits of your users’ needs.
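
A rough sketch of the idea in Perl; the macro file name and the tiny API handed to the macro are hypothetical:

    #!/usr/bin/perl
    # Host application exposing a tiny API to a user-written macro that is
    # loaded at run time. The macro file name and the API are hypothetical.
    use strict;
    use warnings;

    package App::API;
    sub order_total    { my ($class) = @_; return 142.50 }   # stand-in for real data access
    sub apply_discount { my ($class, $total, $pct) = @_; return $total * (1 - $pct / 100) }

    package main;

    my $macro_file = 'macros/discount.pl';    # supplied by the user
    my $macro      = do $macro_file;          # runs the file, returns its last expression
    die "Could not load $macro_file: " . ($@ || $!) unless ref $macro eq 'CODE';

    my $result = $macro->('App::API');
    print "Macro result: $result\n";

    # macros/discount.pl might contain nothing more than:
    #   sub {
    #       my ($api) = @_;
    #       return $api->apply_discount($api->order_total, 10);   # user-tweakable rule
    #   };

The user edits the macro file to change the rule; the host application itself never needs to be rebuilt.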

One reason that HTML and HTTP are abused as application platforms is that the interpreted nature of many of the application servers (Perl, PHP, ASP, JSP), as well as of HTML and JavaScript themselves, leads to instant results. It takes a small amount of time to crank out a pretty interface, only a little longer to get some sort of dynamic functionality going, and everyone is happy until the drudge work of actually writing a quality application sets in. This is partly why VB got such a bad rap for so long; someone with little to no experience could spend a day making an interface through drag and drop, and then power it with totally garbage code on the backend. Before VB, Delphi, etc., you had to spend quite some time and know the Windows API fairly well just to get a basic window on the screen.

Especially when working with an interpreted language, it is extremely easy to separate the business logic from the presentation logic, even in a desktop application. I discussed a potential project today with my boss. As we analyzed the user’s why (one user program), one thing that jumped out was that the user would want us to make wide scale changes to the business logic as their needs changed, and that providing them with an entirely new installation with each change was going to be unrealistic. The solution that we are going to propose? The application itself will be written in VB.Net, but all it will do is pull data from the database and expose core functionality to a Perl interpreter that will eval() the contents of an encrypted file. The end result? Business logic can be edited and altered without requiring a full recompile/installation, and users with minimal programming skills should be able to make minor changes themselves. That is handing power to the user. They do not want to call us for every minor change, and we do not want to support them for every minor change.
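
For illustration only, the Perl side of such an arrangement might look something like this; the rules file name and the decrypt_rules() helper are hypothetical, and the actual decryption is elided:

    #!/usr/bin/perl
    # Sketch: business rules live in an external file that is eval()'d at
    # run time, so rules can change without recompiling or reinstalling the
    # host application. decrypt_rules() and the file name are hypothetical.
    use strict;
    use warnings;

    sub decrypt_rules {
        my ($path) = @_;
        open my $fh, '<', $path or die "Cannot open $path: $!";
        local $/;                    # slurp the whole file
        my $text = <$fh>;
        close $fh;
        return $text;                # a real version would decrypt here
    }

    my $rules_source = decrypt_rules('business_rules.enc');

    # Evaluate the rules in their own package so they cannot clobber ours.
    my $rules = eval "package BusinessRules;\n$rules_source";
    die "Rules failed to load: $@" if $@;

    # The rules file is expected to end with a hash of named subs, e.g.
    #   { credit_limit => sub { my ($customer) = @_; ... } }
    my $limit = $rules->{credit_limit}->({ name => 'ACME', years_active => 7 });
    print "Credit limit: $limit\n";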

      Especially when developing software that will be used by a wide and diverse set of users, it is vital that the developer not attempt to anticipate the limit of the users’ needs. Indeed, it is equally important when software is aimed at a very small, specialized set of users. The “large audience” software such as graphics editors, office suites, Web browsers, email applications, etc. can be used by so many diverse sets of people, that it is impossible for the developer to conceive of every possible way to use them. For the “small audience” program, the users tend to be extremely specialized and have their own particular way of working; what may be perfect for one user will be absolutely worthless to another user.

      On the other hand, the application itself should respond to what the user is doing, and anticipate their needs. This is, in many ways, a matter of interface design. A piece of software that responds to a user’s attention and gestures the moment it recognizes a unique pattern is one that will be more useful than one that doesn’t. Photoshop, for example, shows not just a thumbnail preview of what the changes will look like as you adjust the values of the tool you are using, but also shows the picture itself changing. This saves a lot of time; instead of adjusting the values, clicking “OK,” then having to undo the change and try again, you know if the results are going to be what you want before you even click “OK.” It would be great if Office suite software did the same; mouse over the “Bold” button? Make the selected text bold while the mouse is over the button, and un-bold it when the mouse leaves. That will let the user see if bold is really what they want before they commit to it.

      Prefetching data is another great way that software can anticipate users’ needs and deliver extra value in the process. If the user is paging through a large data set, go ahead and start loading the next page’s data in the background (resources and bandwidth permitting, of course; no one likes an application that makes a computer slow when you aren’t doing anything). The user will see instant results when they click to the next page, instead of waiting for the results. This is one reason why Web-based applications have much less potential than desktop applications; their mechanisms for caching stink, even when using AJAX methods.
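
      Here is a minimal sketch of that prefetching pattern in Perl; fetch_page() is a hypothetical stand-in for the real data source, and a production version would run the prefetch in a background thread or process rather than inline:

      #!/usr/bin/perl
      # Cache-ahead paging: serve the requested page, then warm the cache with
      # the page the user is most likely to ask for next.
      use strict;
      use warnings;

      my %cache;              # page number => array ref of rows
      my $PAGE_SIZE = 25;

      # Stand-in for the expensive backend call (database, web service, etc.).
      sub fetch_page {
          my ($page) = @_;
          return [ map { "row " . ( ($page - 1) * $PAGE_SIZE + $_ ) } 1 .. $PAGE_SIZE ];
      }

      sub get_page {
          my ($page) = @_;
          # Serve from the cache if an earlier prefetch already loaded this page.
          my $rows = delete $cache{$page} // fetch_page($page);

          # Prefetch the next page; done inline here for brevity.
          $cache{ $page + 1 } //= fetch_page($page + 1);

          return $rows;
      }

      my $page1 = get_page(1);            # fetched directly
      my $page2 = get_page(2);            # served from the prefetched cache
      print scalar(@$page2), " rows on page 2\n";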

      Even the programming tools we use fail to anticipate. Developers’ tools are unique; they are written by the very people who are the intended audience. Yet they fall very short in terms of anticipation. Version control is especially bad at this; it does not let you know, until you refresh your code, that another person changed code that your code relies upon in a way that breaks it. It also does not notify that person that they are about to break your code. This is a situation which causes a lot of problems.

      I love the idea of code that anticipates the user’s needs. This is one direction that I see Steve Gillmor headed in with the Gesture Bank. If developers can access an aggregated source of users’ attention and gestures, they can write software that reacts as soon as the user begins to take action, not when they finalize the action. Really smart developers will have the software develop that knowledge on the fly on an individual basis. And that will let the user focus on why they are using your software, and not how they are using your software.

      J.Ja

      • #3286261

        Anticipation and program design

        by wayne m. ·

        In reply to Anticipation and program design

        Anticipating Needs or Understanding Current Workflow?

         I believe I am in agreement with your intent, but I would like to suggest a change in terminology away from “anticipating needs.”

        When I see the term “anticipating needs”, I usually envision a developer adding a new feature just because the developer thinks it might eventually be useful.  I do not think this is the intent of the discussion.

        I see the current discussion reflecting the need of the developer to understand the current workflow of the user and anticipate the next steps.  This is not about identifying new needs but is about understanding existing needs and work sequences.  If one understands the workflow, then it becomes possible to prepare for a following step while a current step is in progress.  This is basically the application-level version of instruction caching on a microprocessor.

        I am a strong advocate of the developer understanding the users’ current operating environment.  I do not believe in having the developer predict future functional needs.  I trust this is in agreement with the article intent and would recommend we avoid the term “anticipating needs” as that may carry a different connotation with some readers.

    • #3285679

      Ironies abound…

      by justin james ·

      In reply to Critical Thinking

      I am currently attending a “webinar” presented by MySQL and Spike Source (both are companies making money on open source). The “webinar” is about using open source Content Management Systems. The joke is that the “webinar” is being presented over Microsoft Office Live Meeting. I guess they are not yet ready to put their money where their mouth is…

      J.Ja

      • #3105765

        Ironies abound…

        by mschultz ·

        In reply to Ironies abound…

        I think they are; most CMSs are built on a MySQL database. To me it doesn’t matter how they decided to do a webinar, open source solution or not. Doesn’t strike me as ironic at all.

    • #3264448

      If you want Windows, buy a PC, not a Mac

      by justin james ·

      In reply to Critical Thinking

      As I have said in the past, the dual boot story on Macs is simply not very compelling.

      There is tons of buzz out there about Boot Camp. A lot of people are saying that this is the best thing for Apple since OSX. Others are saying that this is the worst thing for Apple since the Newton. Personally, I think this is a non-story.

      Dual booting simply is not very useful! Both Mac OSX and Windows XP tie an extraordinary amount of their functionality to the file system. To dual boot and have your data be usable on both platforms would mean that you are going to be putting your data on a monster FAT32 partition, and giving up the advantages of NTFS for the Windows XP installation, and HFS+ for the Mac OSX installation.

      Furthermore, dual booting is a huge pain in the butt. Do you really want to interrupt your workflow, shut everything down, reboot, wait until that special moment to hold a button (to tell it to boot to the alternate OS), and re-login, just to start one application? Neither do I.

      The one thing I see as being a Good Thing with Boot Camp would be to have a small XP partition used for gaming. This would finally allow someone to own a Mac and actually play a game. Not only that, but when starting a game, it is common practice to shut down every possible application in order to provide the game with the maximum possible system resources. Not many people multitask with a game either. The idea of giving Mac OSX the ability to play Windows XP games is a good one, as this is a major reason why many people will not go with a Mac.

      On the other hand, do you really want to spend $100 on a Windows XP license just to play games? It is one thing to spend $100 on a piece of hardware to make your gaming experience better. It is another thing entirely to spend $100 to gain the ability to play games on a hardware platform that pound for pound is already significantly more expensive than a Windows PC to begin with. I can put together a decent gaming PC for around $800. That is only a tad bit more expensive than the cost of a Mac mini. And a Mac mini, even with the new Intel infrastructure, is hardly a gaming machine. Its graphics capabilities are not outstanding, its sound capabilities are not outstanding, it comes with only 1 GB of RAM for that price, and so on and so on. If you are talking about taking the Intel version of a PowerMac and playing a game on it, fine. But for the price of a PowerMac, you could be putting together The Ultimate Gaming PC (assuming you are not being stupid and convincing yourself that you need 500 watts of power supply to drive your computer). And even then, you would probably be better served by buying a Mac mini, a decent gaming PC, and a KVM switch.

      Unless someone has a lot of money to burn in the quest to play Windows XP games on a Mac, there really is no reason to be dual booting into Windows XP from Mac OSX anyways. When the Mac mini first appeared, I investigated the possibility of switching to the Mac platform for my day-to-day computing. What I discovered was that every single application I used my Windows XP computer for either had a Macintosh version, or an equivalent that is just as good. The only reason why I have not made the switch yet is for financial reasons. My at-home computer usage simply is not very complex or dependent upon a PC-only application. Indeed, with Mac OSX being able to run FreeBSD software quite easily, there is a large pool of free, open source software that often does the same thing as PC software, and often does it just as well. Outside of games, business environments, and software development for Windows XP, I just cannot find any reason why anyone needs to boot into Windows XP versus Mac OSX. And even then, the drawbacks are miserable. In fact, in a business environment, you effectively would not be able to use Mac OSX at all. So the Boot Camp software really does not help Apple penetrate businesses, and only a few people will be able to productively make use of dual booting.

      J.Ja

      • #3264446

        If you want Windows, buy a PC, not a Mac

        by steven warren ·

        In reply to If you want Windows, buy a PC, not a Mac

        Have you tried it yet? I have several friends who said that when they tried to put their XP key in during installation, it didn’t work.

        -ssw

      • #3264415

        If you want Windows, buy a PC, not a Mac

        by justin james ·

        In reply to If you want Windows, buy a PC, not a Mac

        No, I have not tried it. I do not have access to a Mac, and even if I did, I wouldn’t try it. I have gone the dual boot route a number of times in the past, and it was always an unpleasant experience for the reasons outlined above. At this stage, for me to try dual booting a Mac would be like if I stuck my hand on a hot burner on the stove and did not expect to be burned, just because every time the stove burned me it was a different burner.

        J.Ja

      • #3264366

        If you want Windows, buy a PC, not a Mac

        by georgeou ·

        In reply to If you want Windows, buy a PC, not a Mac

        Dual booting is the jack of all trades and master of none. Simultaneous booting with hardware virtualization is the jack of all trades and master of all.

        No one wants to dual boot, but everyone will want simultaneous boot IF it’s packaged to be friendly. The ability to flip instantly between OS X and Windows Vista, both installed on top of XenSource, is extremely compelling. For maximum performance, it would be even better if they allowed you to prioritize the OS that has focus, or even pause one OS in favor of another. If Apple goes as far as simultaneous boot, it will have a VERY compelling argument and a huge differentiator over any other hardware manufacturer as the jack of all trades and master of all trades.

      • #3286428

        If you want Windows, buy a PC, not a Mac

        by somsubs ·

        In reply to If you want Windows, buy a PC, not a Mac

        It would be useful for those who only need to run one or two applications that are not native to the main operating system. Mac and Windows are both capable of running most of the things you need.

      • #3103813

        If you want Windows, buy a PC, not a Mac

        by apotheon ·

        In reply to If you want Windows, buy a PC, not a Mac

        I’d like to add a big fat “ditto” to what George Ou said. Simultaneous operation would be a good thing to have available. It’s not the right answer most of the time, but it’s a far better answer than a dual boot setup most of the time.

        There are instances where you’re better off with a dual boot system, but they’re pretty rare. For my purposes, I’d usually want either two separate machines or Linux running Wine for Windows application compatibility most of the time, but I could see a virtualization environment being occasionally useful, especially if I can use that to cut down the number of machines I need to run closed source proprietary OSes.

        Just this last weekend I shoehorned a Linux install into a brand-new Thinkpad to make a dual boot system for someone. It would have been easier to set up a virtualization environment from scratch, but alas, I needed to keep the current Windows XP install on the thing, so I got to resize the NTFS partition and install Linux on what was left instead.

    • #3103767

      MSN adCenter Review

      by justin james ·

      In reply to Critical Thinking

      MSN adCenter: WOW. I just acted on an invitation to MSN adCenter on behalf of one of my customers. We decided to try to enter the beta, since the Overture/Yahoo! ads are randomly not showing on MSN Search when MSN tests their new system, typically late at night. Since the site sells consumer goods, it is just as important that ads are showing at 9 PM as it is to have them up at 9 AM.

      After the initial login, I was brought to an interface that just completely blew me away. Google AdWords and Overture/Yahoo! both have cluttered interfaces. Google has an especially poor interface. The MSN adCenter interface is clean, fast, easy to use, and well marked. I just cannot say enough good things about it. I have not reviewed all of its features, but it looks like it has everything that Google AdWords and Overture/Yahoo! have, while being easier to navigate.

      Additionally, my customer in the MSN adCenter program is in love with the pricing structure, which is better than how they feel about Google AdWords or Overture/Yahoo! They are currently spending $150/month with Google AdWords, and they are hitting their daily budget cap by about noon every day. We are not currently able to perform log analysis (the logs are not available through their current web host), but we are pretty sure that the clicks from Google AdWords are pretty useless. Indeed, we are pretty sure that many of the clicks from Google AdWords are actually click fraud, but without the logs there is no way to tell. They have the opposite problem with Overture/Yahoo! The bids are so cheap with Overture/Yahoo! that they are only incurring about $7/month in charges. But Overture/Yahoo! has a $20/month minimum spend limit, so they are being charged for clicks they never get. Part of the problem is that this website is a Top 10 listing in the organic results for both search engines anyways for nearly every keyword they can think of. I know, it sounds crazy that being in the top ten search results (even for generic terms!) can be a “problem”, but as far as Overture/Yahoo! is concerned, it is! With MSN adCenter, there is a $5 signup fee, the minimum bid is only five cents, and you get charged at first in $50 increments (or every 30 days, whichever comes first); the $50 increment slowly goes up as your time in the program increases. There is no minimum spend limit, so you only pay for clicks you get. My customer is delighted.

      My only complaint so far with MSN adCenter is the system for importing keyword information. It took me about fifteen minutes to find a link to the spreadsheet, and when I found it, it did not match the examples or the otherwise well written instructions.

      The help system is very well done as well. It feels like a Microsoft Office application, by shrinking the browser window to have the Help window fill the remaining space, with a tight interaction between the two windows.

      I was also surprised that MSN assigned us a representative to help walk us through the process and evaluate our needs. The representative is actually calling us by phone; it is nice to hear a voice instead of sending emails and waiting for two days like you need to do with Google or Overture/Yahoo!

      Overall, with the exception of the speed bump caused by the import spreadsheet, I am extremely impressed with MSN adCenter, and look forwards to exploring it in depth as my customers’ needs require me to.

      J.Ja

    • #3105513

      Is Apache inherently more secure than IIS?

      by justin james ·

      In reply to Critical Thinking

      Richard Stiennon at ZDNet argues that Apache is inherently less vulnerable to attacks than IIS, because it makes fewer system calls over the course of serving an HTML page, and is therefore less vulnerable to things like buffer overflow attacks. The argument, while having some prima facie appeal, is specious. Let us examine in depth the truth about what he says:

      Both images are a complete map of the system calls that occur when a web server serves up a single page of html with a single picture.

      It is odd, but I cannot remember the last time a Web server was exploited on basic static HTML serving functionality. Why? Because there is nothing to attack! The serving of static HTML pages simply does not leave room for a buffer overflow, because the server is not running any arbitrary code; all it is doing is mapping the URI request to a local file, and streaming the file to the client with the appropriate HTTP headers at the top. That is it. How are you going to attack that, except for attacking the method that the server uses to process the headers, or maybe getting it to serve a file it should not?
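
      That mapping-and-streaming step is about all there is to it. Here is a minimal sketch in Perl of serving a static file over HTTP; the document root, the port, and the error handling are simplified assumptions, not how Apache or IIS actually implement it:

      #!/usr/bin/perl
      # Toy static file responder: map the request path onto a file under a
      # document root and stream it back with minimal HTTP headers.
      use strict;
      use warnings;
      use IO::Socket::INET;

      my $docroot = '/var/www/html';
      my $server  = IO::Socket::INET->new(
          LocalPort => 8080, Listen => 5, Reuse => 1
      ) or die "cannot listen: $!";

      while (my $client = $server->accept) {
          my $request = <$client> // '';
          my ($path) = $request =~ m{^GET\s+(\S+)}i;
          $path = '/index.html' if !defined $path || $path eq '/';
          $path =~ s/\.\.//g;                       # refuse directory traversal
          my $file = $docroot . $path;

          if (open my $fh, '<', $file) {
              local $/;                             # slurp the whole file
              my $body = <$fh> // '';
              print $client "HTTP/1.0 200 OK\r\n",
                            "Content-Type: text/html\r\n",
                            "Content-Length: ", length($body), "\r\n\r\n", $body;
          }
          else {
              print $client "HTTP/1.0 404 Not Found\r\n\r\n";
          }
          close $client;
      }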

      The more system calls, the greater potential for vulnerability, the more effort needed to create secure applications.

      I can agree with this. Except there is one little problem: Apache cannot be compared to IIS! Take a close look at what Apache does, out of the box: it serves static web pages. CGI is disabled by default. Even if CGI were to be enabled, any vulnerabilities at that point are not in Apache, but in whatever is fulfilling the CGI request. IIS, on the other hand, has all sorts of functionality built into it, such as running ASP scripts, .Net applications, and so on and so on, that Apache cannot do without the aid of third party (or non-default) extensions. What does the system call tree look like for the entire LAMP stack compared to the Windows/IIS/ASP.Net/SQL Server stack? I bet they look much more similar. Sorry pal, but you are using an apples-to-oranges comparison when comparing IIS’s system calls to Apache’s.

      Furthermore, how often does the Web server itself get attacked? Not nearly as often as the applications running on the Web server. Poor programming habits (such as not properly validating data, misuse of routines like printf() on input that was not validated, and so on and so on) are the cause of Web application vulnerabilities. There are not many Web server vulnerabilities out there, now or ever.

      Poor systems administration is another source of common attacks. I don’t care what OS you are running; when you have your Web server running as root or Administrator because that is easier than properly setting up permissions, you have a problem. A Perl script that is running as root outside of a chroot jail is much more of a problem than even the naughtiest ASP.Net application running on IIS as a restricted user. Period.

      Ignorance and laziness are the root cause of the vast majority of security breaches, not the server’s OS or application stack. PERIOD. No OS or Web server in the world will protect you if a programmer sticks the input from a Web form straight into the WHERE clause of a SELECT statement, leaving it wide open to SQL injection. No amount of anti-virus or anti-whatever will help you if you have a sys admin who lets the user upload a file to an area outside the acceptable area and then execute that file while the Web server runs as root. No firewall will save you if the programmer uses a function with a known vulnerability on data that has not been scrubbed.
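
      To illustrate the SQL injection point, here is a minimal sketch in Perl using DBI placeholders; the table, the column names, and the use of DBD::SQLite are assumptions made just for the example:

      #!/usr/bin/perl
      # Unsafe string interpolation versus bound parameters with DBI.
      use strict;
      use warnings;
      use DBI;

      my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
                             { RaiseError => 1, PrintError => 0 });
      $dbh->do('CREATE TABLE users (name TEXT, email TEXT)');
      $dbh->do("INSERT INTO users VALUES ('alice', 'alice\@example.com')");

      my $form_input = "x' OR '1'='1";   # hostile input from a Web form

      # Dangerous: pasting the input straight into the WHERE clause turns the
      # query into "... WHERE name = 'x' OR '1'='1'", which matches every row.
      # my $rows = $dbh->selectall_arrayref(
      #     "SELECT * FROM users WHERE name = '$form_input'");

      # Safe: the driver binds the value, so it can only ever be a literal.
      my $rows = $dbh->selectall_arrayref(
          'SELECT * FROM users WHERE name = ?', undef, $form_input);
      printf "matched %d row(s)\n", scalar @$rows;   # matched 0 row(s)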

      Those are the facts. Mr. Stiennon, I suggest that you learn the facts. You may be “not a journalist, just a blogger” (I am assuming that by that you mean “I write subjectively, not objectively,” which equates to “this is my opinion, not fact”), but you still have a responsibility as a representative (employed or not) of a publication that is well regarded.

      J.Ja

      • #3287212

        Is Apache inherently more secure than IIS?

        by jaqui ·

        In reply to Is Apache inherently more secure than IIS?

        IIS has asp support built right in?

        really?

        that’s why my neighbor, who hates open source products, gave up in disgust trying to get an ASP page served from IIS and went with Apache.

        IIS was incredibly difficult to get ASP page support working in.
        Getting PHP or Perl support into Apache is simple in comparison.
        [ This according to a person who adores Microsoft and won’t use open source software if it’s possible to avoid it. ]

      • #3287159

        Is Apache inherently more secure than IIS?

        by justin james ·

        In reply to Is Apache inherently more secure than IIS?

        Jaqui –

        I cannot vouch for your friend’s experience, but yes, IIS does have ASP (and ASP.Net) support built right in, and the last I checked it was a non-removable part of IIS (unfortunately).

        I do agree, Apache is very easy to get up and running, as well as adding extensions like PHP, Perl, etc. to it. But that ignores the basic premise of the blog, which is that IIS does more from the get-go. That is an objective statement, not a subjective statement. IIS is slowly headed towards the level of modularity that Apache has. Apache’s modularity is awesome. But I stand by the contents of that part of the blog: when you add enough extensions to Apache to provide it with abilities equivalent to IIS’s base functionality, it will make just as many system calls and be just as complex and prone to programmer error. Whether or not someone can figure out how to use IIS’s base functionality isn’t the point. I have had my own share of struggles with IIS, and know that it is not always a very pleasant system, and have found myself wishing I could just run vi /usr/local/etc/iis/iisd.conf or something to work with it…

        J.Ja

      • #3287092

        Is Apache inherently more secure than IIS?

        by merwin ·

        In reply to Is Apache inherently more secure than IIS?

        Strange – last time I checked, a buffer overflow is possible everywhere that a memory area is moved from one place to another, especially the places where it intersects the stack?

        Kim

      • #3287049

        Is Apache inherently more secure than IIS?

        by phirephanatik ·

        In reply to Is Apache inherently more secure than IIS?

        Wow, what an ignorant post. Contradictions all over the place and obviously someone that just doesn’t understand the premise of the original argument.

        Regarding the calling of a static html page and an image, that’s what you might call a “baseline reading”. If you’re testing the web server itself and the OS, you’re not going to load anything special like php. That introduces all sorts of other variables into the picture, such as how php (or asp) is handled. This is comparing the webservers, NOT the things that can be plugged into them or put on top of them. And despite this blog’s naive opinions, there are plenty of ways to attack a web server. I see it in my logs all the time. They go after IIS ’cause it’s a very low bar, obviously. There are all sorts of hacks that you can do to just the simple web server, and that’s all this was testing. An HTML doc and an image is a fair baseline measurement of how the web server goes down into the OS and back again.

        Your argument about what apache comes with as opposed to IIS is a poor one as well. There’s not much in the way of apples to oranges in this comparison here. The request was for the same html file and the same image file. If IIS is leaving vulnerable all these holes that aren’t even being used, then that’s yet another flaw in IIS. Long ago, we (except for Microsoft) learned that in the world of security, you turn it off by default and when you need it, then you turn it on. I guess the author regularly runs a machine that answers on all ports without a firewall based on the poor understanding displayed here. Would you mind posting your home IP address? I think you might have some friends that want to find you. As stated above (and in the original article which you don’t seem to understand), this is comparing the WEB SERVERS, not the middleware apps. If IIS is firing off its CGI handler to fulfill a simple HTTP GET request, then there’s something seriously wrong with IIS (not that we didn’t know that already).

        The web server itself is attacked far more often than this author would like to think. Can we at least get people writing here that are in the business? Sheesh. That’s why anyone that’s serious about this doesn’t run IIS. It’s been flawed since day 1 and has not become any better. If there were no flaws in the web servers, then I guess Apache would be complete. Done. Never has to be touched again. Ditto IIS. Since there are no web server vulnerabilities, it never has to be fixed again (assuming it had been written correctly the first time). However, in the real world and in real life, this is wrong, just like most of the poster’s comments. Patches for web servers are a way of life, just like any other piece of software. There don’t have to be a lot of vulnerabilities in a web server, there just has to be one. That’s all it takes. And that’s the point: with this spaghetti mess that is called IIS on Windows, finding and fixing any single vulnerability is obviously going to be a huge mess, and it will be very hard to tell what ramifications a fix will have on the rest of the system when things are so poorly organized.

        The next two arguments about poor system administration are irrelevant to the topic at hand as nowhere in the original article did it make any comment as to how well the systems were maintained, etc. These are typical windows user responses to try to make themselves feel comfortable about how crummy windows is. Looks like the author more or less copied page 1 out of any security textbook. If you really want to get into it, though, the Apache way is very much better because the patches come much faster. We’re starting to see more and more often people other than Microsoft writing patches to fix Windows since MS can’t fix it fast enough. You’d think that a company with virtually unlimited resources could fix critical vulnerabilities in a reasonable amount of time, but they can’t… and now we have a very much clearer idea of why.

        The author of this needs to learn where the facts really are (Microsoft is not where you “Get the Facts”, hello). Stiennon is right and you are wrong. An organized code base helps you find and fix errors promptly, and also helps you to avoid spaghetti errors in the first place. That’s the whole point of this, and it could be applied to anything, be it web app development (hello php, asp), software development (ie, firefox), OS development, or what we’re seeing here. This is common sense, nothing new. Further, the author of the original article that this author tries so hard to lamely bash isn’t even making any new assertions. Anyone that’s been out there for more than a week knows that you don’t run IIS webservers. Ever. You’re just asking to be hacked. This is what we call “old news”. All the original author did was show us why IIS is in such horrible shape.

        I can’t believe J.Ja wasted all this time writing such a load of garbage. None of it has anything to do with anything. Please, guy… think before you write. I doubt you will be, but you should be embarrassed to have signed your name to this.

      • #3104887

        Is Apache inherently more secure than IIS?

        by a. kem ·

        In reply to Is Apache inherently more secure than IIS?

        It might be that Stiennon is also trying to put fear into our hearts and boost sales of Webroot software – especially since they don’t have a workable Linux solution 🙂 Enable DEP and lots of these problems introduced by the traditional code injection into data space are thwarted by NX anyway, but that’s another topic altogether.

        You’re right on target though. If you make an apples-apples comparison, you’ll end up with vastly different (and comparable) results in the respective call trees. Applications are where the likely vulnerabilities are now, not so much the OS or services. Laziness and/or ignorance is the source of most problems.

        Furthermore, simpler isn’t always better. Would you trade your trans-oceanic flight on a 777 for a flight on a Super Constellation? Heck, the 777 in-seat entertainment system is more complex than the entire Super Constellation. Just because something is easier to understand or appears simpler doesn’t mean that it is more reliable (although there is a certain intuitive appeal to the notion that simpler is better).

        Good discussion. There should be more of it.

        A. Kem

    • #3104986

      No data is safe, not even data you already validated

      by justin james ·

      In reply to Critical Thinking

      I was subjected to an interesting new type of malicious code a few days ago, and I wanted to share it.

      A friend of mine asked me to help him with a little bit of PHP coding a few days ago. He understands a bit about programming, but never did it for anything complex, and does not do it very often. He sent me the script he had written, and I started tinkering with it; the more I played with it, the more it turned into a ground-up re-write of his code.

      The script itself had a fairly simple logic to it: parse the Apache access log, find other pages that have referred visitors to this particular page, and create a link page with the number of referrals. When doing some debugging, though, something odd happened. I was dumping the output of the raw log to the browser, and all of a sudden I was redirected to another web site!

      Digging through the log file, I found the culprit: a site spider had sent, as its User-Agent header, a chunk of JavaScript within <SCRIPT> tags that performs a redirect. Obviously, someone had figured out that many people use Web-based log analysis tools, which will show you the user agents. I am grateful that the site which I was redirected to did not contain any malicious code of its own.

      What made this attack extremely interesting to me is that it did not actually attack a particular piece of software, nor did it care what OS I used or anything else. All it needed was for someone to run code that did not validate data. Indeed, it is a very common developer misperception to assume that data, once it is in a database, is clean and does not need to be validated on its way out of the database. This is the real lesson learned here. I can put all of the input validation I want into my program. But if someone else’s software also accesses the same database, and does not properly validate data, I might as well not be doing validation at all, if I assume that the data is valid when I use it.

      This is another example of how ignorance or laziness on the programmer’s behalf can become a major catastrophe. Imagine if a piece of software had been written before JavaScript was introduced and was still in use. The programmer would not even have known that this kind of attack needed to be prevented.

      This is yet another reason why I am down on Web applications; it is the only system that I can think of in which input by one user is presented to another user in a way that the second user’s computer will parse, interpret, and maybe even execute the first user’s input, outside of the control of the developer. In thin client and desktop application computing, the programmer has total and complete control over the presentation layer and what occurs there. In a Web application, the presentation layer is a complete no-man’s land. There is no telling what will happen there. Data that is good today may become dangerous tomorrow if some new technology gets added to the browser and creates a browser issue. One example would be to allow users to post videos online; if there is a buffer overflow problem in the user’s media player of choice, then you (the programmer) are giving malicious users a tool to attack other users. Web services are just as bad, particularly when using an AJAX method that takes your software out of the loop. In those situations, you do not even control the third party website. It could be riddled with problems, and you would not even know it until users are contacting you and asking why your software infected their computers with malware or crashed them completely.

      At the end of the day, I was able to complete the script. Naturally, I made sure to strip any and all HTML, JavaScript, etc. from the input as it was being read. But it was a great reminder to me that no matter how many external parsers, validators, etc. that a piece of data goes through, they may not be providing the validation that my application requires. Input that is healthy, acceptable, and possibly even desirable for one program is not necessarily so for another program.
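
      For what it is worth, the fix amounts to escaping on the way out as well as stripping on the way in. Here is a minimal sketch of the idea in Perl rather than the PHP the script was actually written in; the log path and the combined log format are assumptions:

      #!/usr/bin/perl
      # Treat log data as untrusted on the way out: escape it before it is ever
      # written into HTML, even though it "came from our own log file."
      use strict;
      use warnings;

      sub html_escape {
          my ($text) = @_;
          for ($text) {
              s/&/&amp;/g;  s/</&lt;/g;  s/>/&gt;/g;  s/"/&quot;/g;
          }
          return $text;
      }

      my $log = '/var/log/apache2/access.log';
      open my $fh, '<', $log or die "cannot read $log: $!";

      my %referrers;
      while (my $line = <$fh>) {
          # Combined log format ends with "referrer" "user agent".
          my ($referrer) = $line =~ /"([^"]*)"\s+"[^"]*"\s*$/;
          next unless defined $referrer && $referrer ne '-';
          $referrers{$referrer}++;
      }

      for my $url (sort { $referrers{$b} <=> $referrers{$a} } keys %referrers) {
          printf qq{<a href="%s">%s</a> (%d referrals)<br>\n},
                 html_escape($url), html_escape($url), $referrers{$url};
      }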

      J.Ja

      • #3103869

        No data is safe, not even data you already validated

        by wayne m. ·

        In reply to No data is safe, not even data you already validated

        I’m not sure that I agree with some of the specific suggestions, though I strongly believe that any system needs to define a data validation and error handling policy.

        Validation of data coming out of a database and having different data validation rules on different systems provide  sources of error and manual rework.  Any time data is rejected, human intervention is required to reconcile the data.  This implies that the appropriate place for data validation is at the user interface to allow data correction to occur with the person most knowledgeable about the data being entered.  Data that is rejected later in processing can only be put into a queue for handling by operations personnel who may lack the knowledge needed to reconcile.

        The same logic leads to the conclusion that applications that share data must accept the same data validation rules.  Data delayed or lost during transfer between systems leads to duplicate data entry and loss of data synchronization between systems.  Applying differing rules ensures manual effort is required to share data. 

        Data validation rules must be consistent and the best way to maintain that is to have a single point for creation, update, and delete of data items; only read capability is shared among systems.  This allows data validation to be implemented and maintained in one location.

        The key is to establish a system-wide data validation and error-handling process.  Having each individual component maintain its own private rules is a recipe for lost data.

      • #3104195

        No data is safe, not even data you already validated

        by tony hopkinson ·

        In reply to No data is safe, not even data you already validated

        If you do not have control over all input you’ve got to validate on output. Not just for web based tools.

        Take a simple CD collection database built with Access: you shouldn’t write the application assuming that because you carefully validated the input, it’s going to be correct on output. You don’t have control, the user does, and they can get at the raw data without using all your careful work.

        Belt, braces, a bit of rope and a regular visual check that your trousers aren’t round your ankles is a requirement.

        Murphy’s law is a universal constant; design with it in mind.

    • #3104022

      Both Internet Explorer and Firefox Need To Be Held to a Higher Standard

      by justin james ·

      In reply to Critical Thinking

      David Berlind over at ZDNet recently responded to George Ou’s piece on the possibility of media bias regarding coverage of Internet Explorer and Firefox.

      Let us apply my patented Logic Analysis System to the concept of holding companies to different standards for the same type of product, using some of Mr. Berlind’s arguments:

      “Internet Explorer can be held to a higher standard than Firefox because it has been on the market longer. In addition, Microsoft has plenty of money and can hire the best engineers in unlimited quantities. Microsoft has been making bold claims about IE whereas the Firefox team does not. Therefore, when Internet Explorer has defects in terms of stability and security it is much more of a problem than when Firefox does.”

      Let’s first parse out all product references and replace them with variables:

      “{Product A} can be held to a higher standard than {Product B} because it has been on the market longer. In addition, {Company A} has plenty of money and can hire the best engineers in unlimited quantities. {Company A} has been making bold claims about {Product A} whereas {Company B} does not. Therefore, when {Product A} has defects in terms of {Defect X} and {Defect Y} it is much more of a problem than when {Product B} does.”

      Now, let’s try filling in some new values:

      Product A = “Ford Explorer”

      Product B = “Kia Sportage”

      Company A = “Ford”

      Company B = “Kia”

      Defect X = “engine fires”

      Defect Y = “brake malfunction”

      “Ford Explorer can be held to a higher standard than Kia Sportage because it has been on the market longer. In addition, Ford has plenty of money and can hire the best engineers in unlimited quantities. Ford has been making bold claims about Ford Explorer whereas Kia does not. Therefore, when Ford Explorer has defects in terms of engine fires and brake malfunction it is much more of a problem than when Kia Sportage does.”

      All of a sudden, this type of statement doesn’t sound so great, does it?

      What I find even more amazing is that the Web 2.0 cheerleaders seem to closely overlap the Firefox groupies (as well as the Google bootlickers, for that matter). They want to replace most userland applications with Web-based applications, but they seem to have no problem if the web browser is filled with problems? It is fine to use a buggy, error-prone product as long as it is GPLed? Especially when you want the browser to replace most userland applications? Get real.

      Some argue that since you pay to use Internet Explorer (indirectly through an OS license) and Firefox is 100% free, Internet Explorer can be held to a higher standard. This is an excellent point, but not completely correct. If you had a choice between paying for IE and not paying for IE (such as purchasing it separately, or as an add-on to Windows), then this might be a legitimate argument. Similarly, if the inclusion of Internet Explorer in Windows played a part in your choice of operating systems, then it could be said that you are paying for Internet Explorer. But in all honesty, if you chose Windows for reasons that have nothing to do with Internet Explorer, you did not really pay for it; you got it as gravy.

      A lot of people also emphasize that Firefox is a new product, and where it is at this stage of development (better than Internet Explorer on some things, more features than Internet Explorer, not as good for other things) is amazing considering its age. This is a completely bogus statement! Firefox actually comes from an older code tree than Internet Explorer! How is that? Firefox is actually the Mozilla Web browser at heart, sans the Mozilla suite (originally it started off as a lightweight browser, but it is now just as heavy as Mozilla). And where did Mozilla come from? It came from Netscape, when Netscape open-sourced Navigator ages ago. Netscape was on the market before Internet Explorer. So to give Firefox bonus points for its age is simply ignoring history.

      The real truth is, Internet Explorer and Firefox need to be held to the same standard, and that standard is not the one that Firefox is being held to; it is the standard that Internet Explorer is being held to, if not a higher one. It is downright shameful that Internet Explorer still has as many bugs and security holes as it does. I do not know if ActiveX is disabled by default, but it should be. Internet Explorer, to be frankly honest, is a dog. It is not standards compliant, its PNG rendering is messy at best, ActiveX is still filled with security holes, and so on and so on. If Internet Explorer needs to meet this standard, then so should Firefox. I do not consider either one of them to be particularly great products. I rarely use Firefox, except to test cross-platform compatibility, so I cannot truly judge it as a browser. But from all of the reports I read regarding its stability, I think I would prefer to avoid it for the time being. When it comes to web browsers, my personal choice is fewer features, probably less security (I do not think we will be able to really judge Firefox’s security for a while longer), and more stability. I rarely go to Web sites that I am not familiar with, I have locked down Internet Explorer fairly tightly, and I most definitely do not go to Web sites of a questionable nature. But I frequently have a web browser open, and cannot afford to have it crashing on me repeatedly.

      There is no extra credit for an engineer who designs a bridge that only partially collapses. There is no curve for an auto maker who has a faulty product that kills only 25% of the people who own the product. There are no bonuses for programmers who write code that allows security breaches. Period. It does not matter who you are. So why give one product a free pass (or a reduced-fare admission) and not another? Especially when both browsers need a lot more work to be ready to transition to the world of online applications replacing desktop applications?

      J.Ja

      • #3104323

        Both Internet Explorer and Firefox Need To Be Held to a Higher Standard

        by tommy higbee ·

        In reply to Both Internet Explorer and Firefox Need To Be Held to a Higher Standard

        Reasons why Internet Explorer SHOULD be held to a higher standard

        (Ignoring both the question of a) whether it IS held to a higher standard and b) how well it meets the standard)

        1) It’s part of the operating system, even if you don’t use it.  You can’t uninstall it, you can’t get rid of it. If there’s a vulnerability in IE, you’re stuck with it, and any patches must be applied EVEN IF THE PC NEVER BROWSES THE INTERNET (for example, servers).  Nothing quite like having to shut down and restart 5 separate servers that are part of an application because the latest Windows update for IE came out….

        2) It’s part of the operating system, and any flaws affect the entire OS.  If Firefox crashes, you can just restart it.  IE  crashing is much more likely to require a reboot.

        3) ActiveX:  The single biggest security hole on every Windows PC, the single biggest vector for spyware.  ActiveX controls have every bit as much right to your PC and its hardware as the OS.  (Do we detect a theme here?)  Security for ActiveX controls focuses on only one thing: preventing it from being installed unless you’re sure you want it.  There is no “sandbox”, unlike Java.

        4)  Regardless of whether or not you bought the PC FOR IE, you still paid for IE.  It’s only reasonable to expect more from a product you paid for.

        BTW, while you COULD take the attitude that you didn’t buy Windows for IE, so it’s just gravy, you could just as easily say that you were forced to buy IE whether you wanted it or not (it’s undeniably part of the cost of producing the OS you DID buy).  Which attitude is more right?  If you use Firefox and avoid IE, you might resent having to pay for IE.  In that case, as unwanted software, it SHOULD be held to a higher standard.  Hey, there’s reason number five!

      • #3104269

        Both Internet Explorer and Firefox Need To Be Held to a Higher Standard

        by conceptual ·

        In reply to Both Internet Explorer and Firefox Need To Be Held to a Higher Standard

        Amen 

      • #3104197

        Both Internet Explorer and Firefox Need To Be Held to a Higher Standard

        by jhilton ·

        In reply to Both Internet Explorer and Firefox Need To Be Held to a Higher Standard

        In the article you state you rarely ever use Firefox because you’ve read reports of instability. If you were to actually use it on a regular basis, you would discover these claims are completely false. I have been using Firefox on many different machines in many different environments since its betas, and the only thing that has been able to crash it is Adobe Reader (surprise, surprise). If a product is rapidly stealing market share, it must be doing something better.

    • #3286916

      MapPoint is nearly uselessly crippled as an ActiveX control

      by justin james ·

      In reply to Critical Thinking

      I have spent the last three weeks of my life, more or less, trying to do what should be a trivial task: get a map out of Microsoft MapPoint 2004 (the desktop app, not the Web service) and save it as an independent image. There are all sorts of problems with this though!

      It seems that someone at Microsoft decided to cripple the MapPoint ActiveX control, for reasons beyond my understanding. Functionality that exists within the MapPoint software and is accessible via VBA just cannot be used from outside of VBA, such as saving the map as an HTML file. The solution suggested? Fire up an entire instance of MapPoint and programmatically command it to do a “Save As”. For whatever reason, what is a 1 second operation through the MapPoint application becomes a 2 to 20 minute (you read that right, 20 minutes for some maps!) operation when run from VB.Net. During this period of time, MapPoint likes to glom 100 MB of RAM and 100% CPU.

      Every single method that I have seen suggested simply does not work. Even telling it to copy to the clipboard fails after about 1300 iterations (I am looping through a large number of maps). The only thing anyone can tell me about this error is to keep retrying it within a loop and eventually it will work. Code that fails at random is not code I think I want to be using.
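
      For the record, the workaround everyone suggests boils down to a bounded retry loop. Here is a rough sketch of that pattern in Perl; copy_map_to_clipboard() is a hypothetical stand-in for the flaky automation call, not a real MapPoint API:

      #!/usr/bin/perl
      # Bounded retry with a short pause, the "keep retrying it within a loop"
      # workaround expressed as a reusable wrapper.
      use strict;
      use warnings;

      sub retry {
          my ($attempts, $delay, $code) = @_;
          for my $try (1 .. $attempts) {
              my $result = eval { $code->() };
              return $result unless $@;
              warn "attempt $try failed: $@";
              sleep $delay;
          }
          die "gave up after $attempts attempts\n";
      }

      # Hypothetical flaky operation standing in for the real COM call.
      sub copy_map_to_clipboard {
          die "clipboard busy\n" if rand() < 0.3;   # simulate intermittent failure
          return 1;
      }

      retry(5, 2, \&copy_map_to_clipboard);
      print "copied\n";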

      What bothers me about this is Microsoft’s decision behind it. I just cannot fathom why they only included basic functionality in the ActiveX control, especially when that functionality exists within MapPoint itself. Since I need to have MapPoint installed to use the ActiveX control, it is not like they would be losing sales or anything like that by providing full functionality to the ActiveX control. This reminds me a lot of the undocumented Windows APIs that Microsoft used to keep other companies from writing software as capable as their own software. If it were not for the fact that MapPoint costs a fraction of what my other options cost, I would be more than happy to dump it.

      This is actually my first time using an ActiveX control of an Office product. Up until now, all of my Office programming has been with VBA within the application itself. Has anyone else had these kinds of problems? If so, I may need to reconsider a few things in how I was hoping to do some future projects.

      J.Ja

    • #3287516

      Geeks and Communication Skills

      by justin james ·

      In reply to Critical Thinking

      It is a commonly held belief that geeks do not need to be able to communicate outside of Nerdland. In fact, it is an outright expectation. Programmers who get nervous around pretty girls, systems administrators who cannot give a presentation to more than two people at a time, and DBAs who stutter unless they are discussing Dungeons and Dragons are what many people envision when they think of IT professionals. These are all common stereotypes. Sad to say, many IT professionals buy into this idea, and sometimes even actively encourage it!

      I am not going to pretend to be surprised by this. Up until the age of sixteen or so, reaching Level 4 as a bard seemed more important than reaching first base with a woman. Weird Al Yankovic was “romantic” in my mind, and a “nice wardrobe” meant a closet full of shirts from hardware and software vendors, preferably ones with multiple years’ worth of pizza stains on them to prove my “authenticity”. I thought that if people did not understand me, it was because they were stupid, not that I was unable to communicate with them.

      Thankfully, I changed. Mostly. I still think Weird Al is funny on occasion, and the ratty shirts are still there (though they now tend to be Metallica and Mr. Bungle shirts from my post-ubergeek years). The biggest change was that my communication skills improved significantly. I took classes in high school such as AFJROTC and Mock Trial that taught me how to speak to an audience, with or without notes. My classes in college (I will merely admit that I double majored in “cannot-get-a-job-ology” which is code for “the liberal arts”) involved few tests, but endless amounts of paper writing. What few tests there were tended to be essay questions. In other words, I was learning a lot about communication skills.

      What does this have to do with the IT industry? Plenty. If you want to know why your manager seems to be a “grinning idiot” with no clue what your job is instead of someone with technical skills, take a look at what that manager brings to the table. That manager is very likely to have an MBA or maybe an MIS degree. Their external learning is probably in “risk management” or Six Sigma, not the Cisco or Red Hat certification you just earned. The manager’s job is to interface between “the suits” and the IT people. The manager does not actually need to know how to do your job if you communicate your needs to him properly. What the manager does need to know is how your job relates to the business.

      It has been my experience since I started blogging about IT issues on TechRepublic, that the majority of the time when I receive heavy criticism, it is because I failed to write clearly and properly communicate my message. Sure, there have been instances where someone climbed all over me for using one bad example or analogy in a 3,000 word post, or where someone was obviously unable to comprehend the topic at hand. But by and large, when I receive negative feedback, it is my own fault for not writing clearly.

      At my current position, my manager does not understand much programming (he knows some VBA), systems administration, database administration, networking, computer repair, or any of the other tasks I do. He knows how to run the company, deal with customers, and so on. He really does not need to know the gritty details of what the project is hung up on; he just needs to know how long the delay will be. He does not care what brand of motherboard I buy or what CPU I select; he needs to know the price and business justification for the expenditure.

      Many of the IT people that I have worked with simply do not understand this. They fill a proposal with technical details, and expect the person reading it to understand the benefit of the proposal from the technical information. In other cases, they write an email that is littered with typos and spelling mistakes. These types of mistakes do not help the recipient to understand why they should approve your request or give your project more resources, or otherwise help you with whatever goal it is that you are trying to accomplish. Tailor your message for the audience. If the recipient is a technical person, make it technical. If they are a non-technical person, use language that a non-technical person can understand. As I often do for programs that I have written, I pass it through the “Mom test.” In other words, I ask my mother to review it. She is about as non-technical as it gets. If my mother can understand what I have written to the point where she can make an educated business decision, then it is a good communication.

      Many of the IT people out there seem to think that this is degrading. These are the same types of IT people who make web sites that only display in one particular web browser, or require you to go find some funky external library, or insist that you recompile the application yourself without providing any documentation. These are the IT people that may be excellent at their jobs, but are hated by everyone that their job touches. You do not need to go this route. No one will criticize you or complain if you learn to effectively communicate with non-technical people. In fact, they will appreciate you even more. My experience has been that improved communication skills lead to better opportunities in life and in my career. If a manager is evaluating two candidates for a promotion, they are more likely to pick someone with less technical skill who communicates well than a more technical person who does not communicate well. Why? Because the person with good communication skills is able to show that they know what they are talking about, while the person without those skills simply cannot be understood.

      If you feel that your communication skills may be lacking, there are things that you can do to help them improve. One suggestion is to read more books and magazines. If you already read books and magazines, escalate the difficulty level of your readings or try reading about topics that you are not familiar with. I have found that crossword puzzles are great tools to expand your vocabulary. Try your hand at writing something, whether it be short fiction, how-to articles, or poetry. If you can, try to go to new places or talk to different people; sometimes we find ourselves in cliques with a shared mindset that makes it difficult to learn how to communicate outside of that group. There are lots of different ways to improve communication skills, but at the end of the day, they all amount to “increase the frequency of your communications, the diversity of the mediums, and the people that you communicate with.”

      J.Ja

      • #3287413

        Geeks and Communication Skills

        by georgeou ·

        In reply to Geeks and Communication Skills

        Nice job!  This should be mandatory reading for all IT people!

      • #3104428

        Geeks and Communication Skills

        by Jay Garmon ·

        In reply to Geeks and Communication Skills

        I second George’s motion. (And since I have a small hand in promotions around here, I can actually do something about it.) Your “Secret Origin” is remarkably similar to my own, though I stuck with the communications angle rather than the path of technical competency. I guess that’s why I write trivia questions and you design software (lousy pay grade differential). Every job I’ve had, every major success I’ve enjoyed, every great D&D character I’ve played, each was made possible by two basic skills: critical analysis and my ability to communicate. There is no professional (or, I dare say, human being) who could not benefit significantly from cultivating both. Who knows, it might just get you a gig writing a column enjoyed by dozens while posing for your author photo in a NextGen uniform.

      • #3104395

        Geeks and Communication Skills

        by sarathi ·

        In reply to Geeks and Communication Skills

        Eloquently put!

        I do what you are suggesting to improve communication, though I haven’t tried crossword puzzles. But I read a lot of books, mostly fiction written in different languages and translated into English; these books always have some less frequently used words, and you need to keep a dictionary handy while reading them!

        You hit the nail on the head when you talked about moving with the same set of people. I have first-hand experience of my communication level going down the drain whenever I become lazy, stop reading magazines and writing, and start spending a lot of time with poorly communicating peers! 🙂

      • #3104364

        Geeks and Communication Skills

        by wayne m. ·

        In reply to Geeks and Communication Skills

        Absolute Agreement

        Working with software developers, I am in absolute agreement.  Good technical skills help an individual do a good job, but good communication skills help everyone do a good job.  Some of my personal recommendations for improved communications follow.

        1) Adopt the “Say it three times” pattern.  Use an introduction, body, and conclusion, both verbally and in writing.

        2) Purchase and read Strunk and White’s “The Elements of Style.”  This is a classic, it is short, and, even though it is a grammar book, it is an enjoyable read.

        3) Get some public speaking training.  There are dedicated training courses as well as college and community college courses, and the low-cost option is ToastMasters.  ToastMasters clubs can be found at http://toastmasters.org/ ; look for the “Find a Club” link.

        4) Write some papers meant to sway opinion, typically either purchase justifications or proposals and new business work.  I’ve been surprised at the number of technical people who, when asked to write a justification for something they want purchased, just go away and sulk.  Hey, I want to get you the tools you need, but I need some help to do it.

        None of the above recommendations is very costly or time consuming.  The entire set is no more difficult than obtaining a new technical certification or learning a new programming language.  The advantage of improved communication is that it opens the door to a wide range of interesting new career paths.  Try some of these ideas out and take your peers along for the ride.

         

      • #3285241

        Geeks and Communication Skills

        by tony hopkinson ·

        In reply to Geeks and Communication Skills

        I agree with what you are saying, though I personally don’t see the problem as being anywhere near as widespread as some would have us believe.

        One tip I would suggest: it’s not enough to simply stay non-technical. What you need is a meaningful analogy. I explained the programming concepts of scope, coupling, and cohesion to a group of electrical engineers in terms of circuit design; I knew enough about that field not to give them an illogical comparison.

        This is the same argument as talking to business people in business terms, i.e., communicating in terms they understand.

        The most disheartening aspect, though, is when you do this and they ignore you. Present them with a long-term future and its ongoing cost versus a smaller short-term bodge, and they go for the latter every time. Even after you explain that this does not make the long-term cost go away, and in fact increases it.

        So even though we’ve learnt to talk in their language, they still aren’t listening. Push us on this front and we have no argument left except the technical aspects, and so we confirm our stereotype.

        Now maybe I’m not business aware enough to realise why short term gains are preferable to long term success; maybe one of these business types should explain it, and then we can all row the boat in the same direction.

         

      • #3285186

        Geeks and Communication Skills

        by justin james ·

        In reply to Geeks and Communication Skills

        Tony –

        Your comments are always insightful and great to read, even when we disagree (although in this case we do agree). I do know the answer to the short-sightedness issue, and I will be blogging about it shortly. You are right, this is a major problem, particularly with IT projects that tend to have a large initial investment. Stay tuned, and thanks for the great idea for my next post.

        J.Ja

      • #3271304

        Geeks and Communication Skills

        by vaspersthegrate ·

        In reply to Geeks and Communication Skills

        I began as an English/Creative Writing major in college, spent many years as an ad writer and direct marketing strategist, then expanded my skills by launching out into internet marketing, blogology, web usability, and ecommerce. So my primary expertise lies in communication and the analysis of text and design.

        This background enables me to say, from my point of view, that it’s not always easy to describe technical issues and products to non-technical people, but I greatly enjoy the challenge and the reward. To see the light of understanding suddenly click on in a client’s eyes, or to see in a blog comment the vivid comprehension of a reader, is a joyful and fulfilling event.

        Technical documentation requires rigorous thought, careful observation, and detailed progression from step A to step B to step C, without ever assuming, “they’ll automatically do this” or “I’m sure they are already at this step (a few steps into the total process)” or “they won’t need me to mention this obviously mandatory activity”.

        To start at the real-world Square One, as users behave without coaching, FAQs, tool tips, help desks, or reliance on site search, is an exacting art.

        Highly technical persons should read “dumbed down” popular tech books, and extremely simplified online sources, like HTML Goodies, to at least get a feel for how a patient, super-easy explanation can be presented. I like to use an esoteric specialist term in my writings, immediately followed by a parenthetical definition or a synonym string.

        In addition, it’s good for technical personnel (all of whom must necessarily explain things at some point, to someone) to read such authors as Hemingway, Kafka, Twain, Dickens, Proust, Joyce, Eliot, Steinbeck, Faulkner, and Poe to learn how to write clearly and with great impact. Poets may also be studied or reviewed to gain techniques for adding interesting, colorful, inventive expressions to a description, when such elaboration, analogy, and emotion could keep the text from being dull, dry, and forgettable.

      • #3148686

        Geeks and Communication Skills

        by charlie.lacaze ·

        In reply to Geeks and Communication Skills

        If you want to get your point across, you should try teaching as a profession. Try teaching non-nerds some simple software tasks and you’ll understand the level of nerd expertise they possess. Rule #1: Understand that just because upper-level management doesn’t speak nerd doesn’t mean they aren’t proficient at what they do, and you may be bidding for dollars they would rather use in their own realm of expertise. Rule #2: Never assume that everyone understands the simple things. I’ve had to teach folks how to use a mouse before I could teach them basic software skills, but I would trust them with my health care. (You guessed it… Doctors and Nurses)

      • #3148647

        Geeks and Communication Skills

        by carter_k ·

        In reply to Geeks and Communication Skills

        If you can afford, or get your company to pay for, continuing education, I recommend a program like that at Mercer University: a fully-online master’s degree program in Technical Communication. See http://www.mercer.edu/mstco/ for details. I heartily agree that interpersonal communication is crucial to success at work, whether that means getting promoted or simply being understood by those you work with. It’s not enough to just be smart or a technical hot-shot. You need to be able to communicate the brilliant ideas you have.

      • #3148570

        Geeks and Communication Skills

        by mcphaim ·

        In reply to Geeks and Communication Skills

        People interested in improving their communication skills might want to consider joining a Toastmasters International club.

      • #3148518

        Geeks and Communication Skills

        by mitchlr ·

        In reply to Geeks and Communication Skills

        Having been employed in IT for more than a decade, I have more than once found myself reduced to incredulity, not only owing to some of my colleagues’ seeming lack of facility with the English language, but also owing to their apparent blithe unawareness that communicating with others outside the tribal confines of the geek community may be a desirable goal.

        One could wish that 133t hackers might breed with the English majors up in the communications department, who are as lacking in facility with technology as the geeks are with language, and hope that hybrid vigor would produce progeny with the ability to make servers tap dance and write a clear elucidation of how and why it was done and how it benefits the company.  It is more likely, however, that such a pairing would reinforce the negative attributes rather than the positive, producing an individual who could neither speak, read, write, spell, nor remember his password.

        TBG58

      • #3148499

        Geeks and Communication Skills

        by dirtclod ·

        In reply to Geeks and Communication Skills

        Great Article!

        I question the integrity of individuals who supposedly grasp complex technological issues, yet claim an “inability” to master their native language. I believe it’s more accurately described as “Selective Application Laziness,” and quite possibly moderate narcissism; either way, it’s by choice and it’s a cop-out. If someone can’t master simple communication, I wouldn’t hire them to walk my dog, much less run my servers. The guy can’t SPELL, but you’re trusting him to CODE or work in a BIOS environment?!  You mean to tell me that you cannot learn basic sentence structure but have the ability to design networks?  I’m sorry, but I’ve seen the output of the south end of a bull before.

        I once worked with scientific researchers who (unlike me) had three degrees in things like Quantum Physics, Nuclear Physics, etc. The brightest ones, the ones who had the Nobel Prizes, could EXPLAIN IN PLAIN ENGLISH what they were doing! The PRETENDERS couldn’t; they invented arcane slang and needless complexity in order to JUSTIFY THEIR EXISTENCE. Scientists who communicate get GRANTS.  IT people who want the boss to GRANT funding for projects might find the time to learn our language.  If not, that’s where I question the true intelligence of these alleged “savants.”  It’s that old axiom: “If you can’t dazzle ’em with brilliance, baffle ’em with b.s.”

        Now, as far as being uneasy around attractive women, speaking in front of groups, etc.? I have great empathy for these people, many of whom have been cruelly embarrassed, rejected, or mistreated by some of these narcissistic pretenders; there’s usually a good, concrete reason. Social skills are not learned in a classroom; they’re gained by “hard knocks” and learned on the fly. At any rate, there’s a great degree of LUCK involved in whether or not someone masters advanced social skills.

        Any time you communicate, someone will not receive the information as you intended.  That’s no reason to give up; if you can’t make a mistake, you can’t make anything.  If companies continue to hire mumbo-jumbo masters who intentionally invent complexity as some form of mental masturbation, then those companies will be: up an unsanitary tributary with no feasible means of transportation!

      • #3148497

        Geeks and Communication Skills

        by duckboxxer ·

        In reply to Geeks and Communication Skills

        Great article.  I went to college at a small school, actually the one Carter mentioned above.  At that time, Computer Science was classified under liberal arts, so I had to take all the English and history classes that the philosophy kids did.  Turns out that was a good thing: I can communicate decently with the non-technical people (i.e., management and customers) I have to deal with on a daily basis.  Also, if one wants to move up the career food chain, communication is a key factor.

      • #3149154

        Geeks and Communication Skills

        by robbi_ia ·

        In reply to Geeks and Communication Skills

        Very well written!

        I have more suggestions for learning to communicate.  Join a professional organization, and then get involved in it.  Volunteer to speak at meetings, or volunteer at work to lead staff training sessions.  And as J.Ja has already suggested, read, read, read!

      • #3149071

        Geeks and Communication Skills

        by james b. ·

        In reply to Geeks and Communication Skills

        I completely agree. I was a bit lazier and only got one BA, but I think it was much more helpful than any BS I could have gotten. I am currently the only IT guy at a satellite office for my company. I have to manage desktop users all day long, and occasionally network and phone system issues. I got my job with a strong tech background but absolutely no direct experience. I got it because all of the other applicants had just graduated from one of those schools that guarantee an MCSA, and they had no social skills. I am still a complete geek on the inside, but at work I keep things simple and easy to grasp for my users. I learned the specific tech skills I needed on the job. I think that is what the hiring managers here understood; you can teach someone tech skills pretty easily if they have the aptitude to learn them. If you had the ‘aptitude’ for social grace, you would already have learned it. They realize they can’t teach that to you.

      • #3150361

        Geeks and Communication Skills

        by oneamazingwriter ·

        In reply to Geeks and Communication Skills

        Fantastic post. I felt bad that I didn’t read it until now, but after reading the many comments, I’m glad I was late to arrive. The original post and comments will bring me back to read this again. Great stuff, J.Ja

      • #3157245

        Geeks and Communication Skills

        by santhanag ·

        In reply to Geeks and Communication Skills

        Check out this introduction article on Technical communication:
        http://www.articleworld.org/Technical_communication
        Content:
        1. Professions
        2. Formats
        3. Tools
        4. Resources

    • #3104307

      Why the Oracle Application Stack should not happen

      by justin james ·

      In reply to Critical Thinking

      Apparently, Oracle is seriously considering putting together its own full application stack. There seems to be a lot of debate about this, both positive and negative. Personally, I think I will have to side with the naysayers on this one.

      First of all, the last thing the world needs is more confusion amongst the Linux distributions. I understand that Oracle is looking to purchase an existing distribution, not start their own. But do open source projects really fare well after being purchased by a large corporation? SuSE seems to be a bit unhealthy after being bought by Novell. Granted, Novell has “Fido’s magic touch,” where everything they touch turns to dog doo. WordPerfect. Corel. Their own products. And so on. Much of the open source community seems to be “personality driven.” The departure of one or two key contributors can cripple a project, not just because they were cornerstones of development, but because when they leave, so do many other people. Forking is another common problem after an open source project gets purchased. This is actually the main reason why I use BSD instead of Linux; I feel that there is too much churn in the Linux community. If I were a Linux user, my biggest fear would be “what if Linus Torvalds gets hit by a bus?” I can imagine the power scramble if that were to happen, and it scares me. So the idea of Oracle purchasing or starting a Linux distribution of its own would worry me, particularly if they were to purchase one.

      Another issue with the idea of Oracle having a stack is that I am not sure Oracle is a company I would want to have to deal with. Their website is a frightening place indeed; finding useful information in a usable format can frequently take hours. Just try to find the SELECT syntax on their website, I dare you. I do not think that Oracle understands how to interface with customers on that basic a level well enough for me to want my OS coming from them.

      Another arena where Oracle already falls woefully short is their management tools. Everything about installing, configuring, and maintaining Oracle’s database products is pure misery. Everything is done wrong, as far as I am concerned. They have visual tools such as Oracle Enterprise Manager that simply do not work right. For example, if you put the cursor into a field and start typing, the first character is usually dropped. The interfaces on their visual tools stink as well. Oracle Enterprise Manager has “features” on the menu that, when selected, tell you to use “Oracle Enterprise Manager Console.” Isn’t that the tool I am currently in? Even the GUI version of SQL Plus is a dog; it has a maximum line length, forcing me to wrap lines by hand, and does not even do me the courtesy of putting up a vertical bar showing me where to wrap them. Oracle products do not install correctly; the DLLs needed for ASP and ASP.Net connectivity have their file permissions set incorrectly, a problem that has persisted through a number of major revisions. Even trying to get clients talking to Oracle is a pain. Oracle needs multiple hundreds of megabytes worth of garbage to get a desktop talking to an Oracle database, whereas MySQL and Microsoft SQL Server just need a tiny ODBC (or JDBC, or whatever the right method is for your purpose) driver. Overall, the last thing I want is for my OS to be delivered by a company with this mentality.

      Oracle is also very, very bad about delivering patches, and they do not seem to have a handle on security. In terms of timely patch releases, they make Microsoft look like perfection. Oracle has a bad habit of outright ignoring critical security flaws for months or years at a time, even after being told about them. Their patch cycle seems to be quarterly; meanwhile, Microsoft gets criticized for monthly patches. Oracle also does not seem to understand automatic patching. Again, these are traits I simply do not want in the source of my OS.

      As it stands now, a good portion of Oracle’s stack is not even their own software. Oracle Application Server appears to be a hacked-up version of Apache. They do not have any languages of their own (or even re-packaged/re-branded) outside of PL/SQL, which you will not be writing applications in. Right now, the only part of the LAMP stack that Oracle can play a role in is the M. You can have a LAOP stack if you want. Oracle is considering grabbing the L. They are still missing the A and P. Indeed, when one looks at what Oracle does well (a high performance, scalable database server), I would much prefer that Oracle go after the A and not the L! Web servers are more closely related to databases, in terms of how they get written. You simply do not try to start from the middle of the stack and work your way out like Oracle is considering. You need to start from one end or the other and go across. Red Hat had the L; they bought JBoss to get the A. They still need M and P. Oracle is trying to start at M and then go for the L. This just does not work. This is not a stack; this is patchwork insanity.

      The idea behind a stack is that you have a group of tools that sit on top of each other and play nicely with each other. Patchwork stacks just don’t cut it. Components within a stack are often not best of breed by themselves, but the combination works well. Look at LAMP: the P is not so great (Perl is poor for web development, PHP is wretched in general, Python just isn’t very popular), and MySQL is not quite top-tier yet (although it is still quite good). But LAMP as a whole works beautifully. Oracle does not know how to make its software play nice with other software. The idea of them trying to build a stack is laughable, at best.

      J.Ja

      • #3285313

        Why the Oracle Application Stack should not happen

        by georgeou ·

        In reply to Why the Oracle Application Stack should not happen

        “Oracle is also very, very bad about delivering patches, and they do not seem to have a handle on security. In terms of timely patch releases, they make Microsoft look like perfection. Oracle has a bad habit of outright ignoring critical security flaws for months or years at a time, even after being told about them. Their patch cycle seems to be quarterly; meanwhile, Microsoft gets criticized for monthly patches.”

        Where’s Apotheon on this :)?

      • #3150170

        Why the Oracle Application Stack should not happen

        by ms_lover_hater ·

        In reply to Why the Oracle Application Stack should not happen

        J.Lo,

        Oh, Jeez, sorry that another company wants to get you out of that comfy Microsoft world… I don’t have time to respond to all of your lies (do you work for M$???), but this one truly shows your ignorance:

        “Their website is a frightening place indeed; finding useful information in a usable format can frequently take hours.”

        What, are you still on 56K? Do you not use web browsers? Five minutes tops to find anything there. Sorry that Oracle wants to give you best of breed, whereas Microsoft has been inbred for quite a while now, and it shows. Instead of blathering on for paragraphs about how bad Oracle is, spend some time learning how to navigate a web site. Say hi to Billy Boy Gates the next time you see him.

      • #3150050

        Why the Oracle Application Stack should not happen

        by ljhunt ·

        In reply to Why the Oracle Application Stack should not happen

        Are you kidding? Can you not smell the burn MS will need to go through to become “COMPETITIVE”?

        Oracle has the power, money, and know-how to bring Linux to the marketplace in direct competition with any MS product, and all users and admins will benefit. Either MS will adapt and overcome, or it will go extinct (Museum of Antiquities, MS Wing). That simple.

        As far as the technology goes, any sysadmin willing to properly configure a Linux server will find that it is superior to any Windows server performing the same task on the same hardware, going from bare metal to up and running in half the time, while dodging the once-a-week MS security patch for items over two years old (yes, years) that were only recently addressed.

        As far as the stack comparison to LAMP goes: realize that MS has a fear of being run out of the market on a raised floor, and if Oracle does not address it soon, their business will be threatened too. MySQL has grown by leaps and bounds, and by version 6 it may be more than just a serious threat to Oracle’s and MS’s business. A year of integration with Apache and PHP (or Perl) on top of a tuned, stable version of Linux is Oracle’s attempt to stave off premature extinction. Open source is here, and with several other advances in PC hardware and cost reduction, the UNIX server farms of the past will be reborn as Linux (PC-based) Beowulf configurations. Remember, no matter how good we are, the bean counters run the business, and cheaper rules the business, with faster taking a close second.

      • #3162384

        Why the Oracle Application Stack should not happen

        by havacigar ·

        In reply to Why the Oracle Application Stack should not happen

        I have just two things to say, OCS and OCFO.

        They should have named them crap and worse crap.

        I rest my case.

    • #3285244

      eWeek’s Interview with James Gosling

      by justin james ·

      In reply to Critical Thinking

      eWeek recently published a very interesting interview with James Gosling, the father of Java. Mr. Gosling’s candor and honesty are to be admired, and it seems that he and I are on the same wavelength regarding many topics:

      AJAX

      “Creating [AJAX components] is extremely hard. Not because programming JavaScript is hard, but because all these flavors of JavaScript are ever so slightly different. You have to build your components so that they’re adaptable to all the different browsers that you care about. And you have to figure out how to test them. None of the browsers has decent debugging hooks.”

      I have been saying this for some time as well. The discrepancies between various browsers’ implementations of JavaScript make it difficult at best to write a fully cross-platform AJAX application of any complexity. Furthermore, the browsers themselves simply do not offer an environment to debug in; developers are stuck with write() and alert() to show the values of variables throughout the code execution. It is like working with BASIC code circa 1986.
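
      To give a concrete flavor of what Gosling is describing, here is a rough sketch of the cross-browser juggling required just to obtain a request object, along with the alert()-style “debugging” that developers fall back on. This is my own illustration, not anything from the interview; the function name createRequest() is made up, but the object names are the standard ones for the Mozilla-family browsers and for IE 5/6:

      // Create a request object in whichever way the current browser supports.
      function createRequest() {
          if (window.XMLHttpRequest) {              // Mozilla, Firefox, Safari, Opera
              return new XMLHttpRequest();
          } else if (window.ActiveXObject) {        // Internet Explorer 5/6
              try {
                  return new ActiveXObject("Msxml2.XMLHTTP");
              } catch (e) {
                  return new ActiveXObject("Microsoft.XMLHTTP");
              }
          }
          return null;                              // no AJAX support at all
      }

      // With no real debugger, "tracing" a value usually means popping up an alert() box.
      var req = createRequest();
      alert("Request object is: " + req);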

      “There’s no ability to do cross-platform QA; you’ve just got to do them one by one. Right now it looks pretty hopeless to make AJAX development easier.”

      This is so sad, yet so true. I simply fail to see how anyone can realistically push AJAX as a platform for applications with the same level of functionality as desktop applications under these conditions.

      Regarding Sun’s business mistakes

      “There are so many to choose from. And sometimes it’s hard to say what’s a blunder and what’s just the case of the world being weird.”

      All I can say to this is “WOW!” Can you imagine Bill Gates or Steve Ballmer or Michael Dell or Steve Jobs or Larry Ellison saying something like this? Neither can I. Granted, Gosling is an engineer, not a business person. But it is this type of attitude that has hampered Sun so badly over the years. The fact is, with business sense like this, Sun’s very existence is testimony to the quality of its products. Sun has indeed made more blunders than just about any major tech company out there, except for maybe Novell and Borland. Like Novell and Borland used to be, Sun is run by engineers. Their products are amazingly good most of the time, but they often simply do not fit the realities of the business world, and the rest of the company just does not know how to get paid for those products. Solaris is regarded by many, if not most, knowledgeable people as the best UNIX out there, and certainly better than Linux. Yet Sun cannot manage to give it away! That is because Sun waited way too long to try to go open source with it. First they attempted to embrace Linux, then they open sourced Solaris. Sun seems to change its motto every year, which just shows how confused and directionless they are.

      Overall, I like Sun. I think Solaris is a good UNIX, from what I know and have seen of it. Java, while being a dog in reality, is an innovative idea and did a lot to break Web development out of the stagnation of CGI. If the VMs were not so wretched, I would see it as a great competitor to .Net on the desktop. It is just a real shame that no one at Sun understands business.

      J.Ja

    • #3148976

      IT Projects Up Against “Penny Wise, Pound Foolish”?

      by justin james ·

      In reply to Critical Thinking

      In response to a previous article (Geeks and Communication Skills), commenter Tony Hopkinson wrote:

      “Now maybe I’m not business aware enough to realise why short term gains are preferable to long term success; maybe one of these business types should explain it, and then we can all row the boat in the same direction.”

      DirtClod also points out the connection between IT projects and research grants:

      “Scientists who communicate get GRANTS.  IT people who want the boss to GRANT funding for projects might find the time to learn our language.”

      The two ideas are not unrelated. Tony’s complaint is a common frustration in IT. Even the best proposal in the world that shows great ROI numbers, productivity gains, reduced downtime, and all of the other things that a good IT proposal should show can be turned down. One would think that “the suits” of all people would jump on a chance to increase profits after overcoming a substantial upfront cost. After all, this is why companies build factories, invest in training, outsource workers, and so forth.

      Unfortunately, increased profit is not the actual goal of many companies, particularly publicly traded companies. The actual goal of these companies is “to increase shareholder value.” “Shareholder value” directly translates to “stock price.” Look at the compensation plans for C-level executives (CEO, COO, CIO, etc.). Their bonuses are tied more directly to stock price than to profit, market share, revenue growth, or any other direct financial metric. In addition, a significant portion of C-level compensation is in the form of stock options. Moreover, the management of a publicly traded company has what is called “a fiduciary responsibility to the shareholders.” That means that they are held accountable to the shareholders, not to the employees or customers. To put it more bluntly, it is in management’s best interests to increase the stock price of a company at the expense of any other metric.

      The end result is that a company acts in whatever way will be best for the stock price, which is not always best for the company. To make the situation worse, a C-level executive typically does not stay with a company for more than a few years. They have little incentive to worry about the long term health (or even the long term stock price) of a company. Shareholders now rarely hang onto a stock for the long term; they are looking to “buy low and sell high,” which is hardly a recipe for having shareholders that care about the viability of a company’s business model or the sense of their business practices.

      It is true that things like profit and loss, revenue growth, market share, and so on play a role in the stock price of a company. This information gets released to the public once per financial quarter. IT projects, sadly, tend to be capital intensive. The inefficiencies that IT projects resolve are not line items on any spreadsheet, though. Let me give an example. At one company I worked for, the computer that I was assigned was a Pentium 1 with 128 MB of RAM, running Windows 95. This computer was expected to be running the following applications throughout the shift: Microsoft Excel, Microsoft Outlook, Microsoft Word, Internet Explorer (two windows minimum, one of which contained a Java applet), a custom built Java application (using a non-standard Java VM, so I would have two Java VMs open at any given time), McAfee Anti-Virus, and a telnet client that was extremely heavy (it had all types of scripting, macros, etc. built in). Microsoft Word would be open about half of the time, as well as a few other applications. By the way, this was in the year 2003. To the best of my knowledge, that PC is still in use. This setup was so unstable that I had to start the applications in a particular order or face system lockup. Everyone had to come onto the shift 15 minutes early as well, so the company was paying 15 minutes of overtime per day, which comes to 65 hours of overtime a year per person. Even at our pittance of a wage, this inefficiency would have paid for replacement PCs within a year.
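
      Just to make the back-of-the-envelope math explicit, here is the calculation spelled out (the 15 minutes per day and the 65 hours per year come straight from the example above; the wage, headcount, and time-and-a-half multiplier are purely illustrative assumptions of mine):

      var minutesPerDay = 15;                              // everyone clocks in 15 minutes early
      var hoursPerYear  = (minutesPerDay * 5 * 52) / 60;   // 5 days/week, 52 weeks = 65 hours
      var hourlyWage    = 12;                              // assumed wage, for illustration only
      var overtimeRate  = 1.5;                             // assumed time-and-a-half overtime
      var staffOnShift  = 20;                              // assumed headcount, for illustration only
      var annualWaste   = hoursPerYear * hourlyWage * overtimeRate * staffOnShift;
      // annualWaste works out to $23,400 a year -- easily enough to buy that shift new PCs.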

      Why were those PCs never replaced? For the same reason that many IT projects never occur: the short term budgetary impact would have affected the stock price more than the efficiency gains. Inefficiencies simply do not show up in reporting figures. There is no line item in a quarterly report that says “excessive staffing due to a process not being computerized,” or “additional payroll because computers are slow,” or “missed SLAs due to poorly trained users.” In comparison, “three month project to automate a process,” “replacement and upgrade of existing PCs,” and “training classes for users” are all line items that Wall Street (or London, or Tokyo, or wherever) sees. It is sad, but it is true. This is why a great project can get shot down. The initial upfront cost is just too high, and the ROI just does not come fast enough. I am not going to pretend to know what numbers “the suits” have in mind when evaluating an IT project, but off the cuff it seems to me that, for a quarter-long project to get approval, you need to deliver a 25% savings within one quarter of project completion and a 150% return within one year.

      This mentality hits IT everywhere you look. Projects get rolled out as “betas” that never seem to get finished because of it. Programs get written in “quick and dirty” languages like .Net that are just not appropriate for certain types of projects. In reality, if you want to write a highly scalable Web application, writing it in C++ in a CGI environment is your best bet. But no one will ever get approval for that project, because good C++ coders are expensive, and the project will take forever because you would need to re-write much of what JSP, PHP, and ASP already handle.

      The only way to get the funding you need for your project is to do exactly what DirtClod says to do: learn management’s language. If you cannot show, in words that management understands and in a format they understand (“Gentlemen, start your copies of PowerPoint!”), that your project will more than pay for itself quickly enough to be palatable to the stockholders, it simply will not fly.

      Do not think that this does not affect private companies, either. It does. Private companies with a small number of owners (a few partners) have a tendency for the owners to treat every dollar in the company as their own, which is understandable. The $10,000 you want to spend on servers is viewed by the boss as half a year’s tuition for his child’s college. Many, if not most, privately held businesses think ahead to selling out or going public. A history of cost management helps them get the most return on their initial investment of time and capital. No matter how you cut it, IT projects that cannot be shown to have a big, quick ROI kicker are just not going to be approved, regardless of how well you write the proposal. I will discuss these kinds of “low hanging fruit” projects soon; stay tuned.

      J.Ja

      • #3150446

        IT Projects Up Against

        by wayne m. ·

        In reply to IT Projects Up Against “Penny Wise, Pound Foolish”?

        Unfortunately, the focus on short-term thinking is neither new nor unique to IT.  Dr. W. Edwards Deming (“Out of the Crisis,” “The New Economics”) was writing about these concerns 25 years ago.  In fact, some of the phrasing in the blog above sounds very similar to Dr. Deming.  I would highly recommend his books to anyone (I believe “Out of the Crisis” is still in print).

        One warning.  Dr. Deming was a very insightful man, but not a very captivating writer.  Nonetheless, he is one of the few writers that I have reread multiple times and found value in each rereading.

         

      • #3150403

        IT Projects Up Against

        by justin james ·

        In reply to IT Projects Up Against “Penny Wise, Pound Foolish”?

        Wayne –

        Thanks for the heads up on those books. I will try to get hold of them; I love reading about economics and business (I am thinking about getting a Masters in it). I have actually heard of The New Economics but not the other one. They sound more interesting than my current reading list anyway. 🙂

        J.Ja

      • #3150070

        IT Projects Up Against

        by tony hopkinson ·

        In reply to IT Projects Up Against “Penny Wise, Pound Foolish”?

        I’m a tech head, a programmer, a rational mind. I’m well aware that there are reasons for the decision; I’m even clever enough to deduce a few possibilities, despite my appalling lack of business skills.

        All I want is a clue, along with the NO!

        Not a lot to ask; otherwise, only a few possibilities exist:

        I’m crap at communicating and ‘you’ don’t want me to improve.

        I’m good at communicating but ‘you’ don’t agree with my numbers.

        I’m good at communicating and ‘you’ are bloody horrible at it.

        I’m going to be sacked next week, so there’s no point in starting.

        The firm is going bust next week, so there’s no point in starting.

        ‘You’ are going to pass the idea to your nephew, the IT guru who starts next week.

        ‘You’ were unable to repay the unofficial loan from company accounts.

        ‘You’ spent the entire budget on gee whizzery.

        ‘You’ are leaving next week, so you aren’t signing off on that big a cost in case your boss takes it out of your golden handshake.

        A manager who doesn’t communicate is less use than a tech who can’t or won’t. If the perception is that you don’t communicate, then you don’t communicate.

         

        Low hanging fruit, my favourite!

        Eat them all real quick, get too fat to climb for more, exhaust yourself building a ladder, call in a mate for a share of the viands, watch him chop the bloody tree down.

        Sore point J, been there done that, have the compost.

         

    • #3150274

      Podcast Only? No Thanks!

      by justin james ·

      In reply to Critical Thinking

      [5/2/2006] Edited to clarify for those who may have missed the original discussion. I am not against podcasts. I am against information being distributed as a podcast but not available in any other format. There is nothing inherently wrong with podcasts as long as that information is available in a text format in addition to the podcast, or if the podcast provides a unique value proposition.

      This blog is written in response to the comment thread for Don’t be lazy: Communicate with your end users.

      I am with Palmetto on this one. I refuse to use podcasts (along with RSS and many other “Web 2.0” hocus pocus). I can read approximately 10 times faster than anyone (save the guy who did the Micro Machines ads) can talk. I can read the script for a 90 minute movie in about 10 minutes, typically. I can read a 1,000 page book in about 10 hours. Listening is the least efficient form of knowledge transfer for me. Reading is the most efficient. Like many others, I can read an article at my desk, even if I am on the phone or waiting for another process to finish. If I am interrupted, I just remember where I was, re-read the preceding paragraph or sentence, and continue, as opposed to having to shuttle back and forth to jog my memory.

      Furthermore, this is the Internet, not the radio. Text is the lowest common denominator. Satisfy the LCD first, and then worry about the high-end users or special-needs users. This is Reason #4,562 why I am against AJAX; it fails the LCD test. If your information cannot be used within Lynx, throw it out; it is worthless. Even someone who uses a screen reader (such as the blind or vision impaired) is as well served by a text document as by a podcast. A deaf person cannot use a podcast at all, whereas they can read text. Do you really intend for your message to be inaccessible to the deaf? If I owned a store that a handicapped person could not enter for whatever reason, I would be in deep trouble under Federal law. Your online business should operate the same way. No excuses. Call me silly, call me crazy, but I do not turn away business because I wanted a fancy widget that deprived 5% of my potential customers of the chance to give me their money.

      The only time a podcast (or any other audio-only presentation of information) adds any value is when the voice itself provides information that cannot be easily or adequately transcribed (vocal inflection, information about music with sample clips, sarcastic remarks, etc.). I do not have an iPod or similar device, and if I did, I might not have a way of hooking it up to my car stereo, where I listen to most of my audio. What do you propose I do? Burn the podcast to CD to listen to it in my car? I will not do that, and most likely neither will anyone else.

      In your article itself, you state:

      “In this 5-minute podcast, I explain why there is no substitute for good communication and offer a little advice for using three common communication methods: e-mail, voice mail, and face-to-face contacts.”

      Offering a podcast without a transcript (or at least a summary of the salient points) is pretty lazy. I have a problem with my hands, wrists, and eyes. Even at my age, typing, especially at the end of the day, is extremely painful to me physically. Yet I do it anyways, because that is the best way to communicate on the Web, and I communicate via the Web. I would love to simply be a lecturer or a radio or TV personality, but I am not. I put up with the pain in order to provide the best possible service to my readers.

      Remember, we (the audience) are the customer. If we are unable to use or consume your product, no matter how good it may be, you will not be able to sell it. Period. This is why I spend so many bytes in my blogs discussing basic usability. Poor user interfaces equal low market share, regardless of how good the product is (look at Linux in the desktop market). Of course, a great user interface does not guarantee great market share (look at Macs in the desktop market), but a poor interface will always lose to a better interface, unless the product is so good that its value proposition, even after the usability hit, still makes it substantially more valuable than the more easily used product. It is that simple.

      J.Ja

      • #3150127

        Podcast Only? No Thanks!

        by Bill Detwiler ·

        In reply to Podcast Only? No Thanks!

        TechRepublic is about content choice (not exclusion)

        During my almost 6 years with TechRepublic I’ve learned that our members (our customers) want content delivered in multiple formats. Some members prefer to read articles online, others want to download our PDF documents, and many prefer our newsletters. Taking a more active role, many members create content by participating in our Discussion forums and Technical Q&A. TechRepublic has always and continues to provide choices.

        We offer online articles on Windows hacks, policy and form downloads, interactive discussions on IT management and all matters of geekology. When possible and appropriate, we even offer the same content in multiple formats–check out the download Security through visibility: Revealing the secrets of open source security and the article version. My podcast 10+ tools every support tech should have in their repair kit is also available as a PDF download. We know our members pick and choose the content formats they find most helpful–we encourage that choice.

        As the Internet evolves and content formats expand, TechRepublic will strive to offer content in those new formats. Text is still the dominant Internet format, but high-speed connections are increasing the prevalence and user desire for new formats, such as RSS feeds, podcasts, photo galleries, and video. In a July 2005 News.com article, the Diffusion Group predicted that the U.S. podcast audience will climb to 56 million by 2010 and that three-quarters of all people who own portable digital music players will listen to podcasts. As an online media company, CNET and TechRepublic cannot and should not ignore this trend. Does this mean we will abandon our text-based content? Of course not. We will continue to offer content formats that our members tell us they want.

        As someone who wants to produce great content that our members find helpful, I welcome content suggestions and constructive criticism. I may not always agree, but I will always listen and act when appropriate. I take offense, however, at J.Ja’s statement that by offering an additional content format I am making TechRepublic more difficult to use for our members with disabilities. Now that we offer streaming and downloadable podcasts, those with impaired vision can choose to listen to our content or read it with a screen reader. Those with hearing impairments can still access our online and downloadable text-based content. Again, we often offer content in multiple formats and allow the customer to choose.

      • #3148182

        Podcast Only? No Thanks!

        by mwaser ·

        In reply to Podcast Only? No Thanks!

        Transcripts of podcasts would make TechRepublic much more valuable to me.  I don’t know why you don’t do it ALL the time.

      • #3148177

        Podcast Only? No Thanks!

        by duckboxxer ·

        In reply to Podcast Only? No Thanks!

        You sound as though you are completely against podcasts altogether.  I truly hope not.  I have actually very recently gotten hooked on these.  I download them (via NewsGator and FeedStation) and pop them on my PDA to listen to.  I’m a news junkie and generally nerdy; hence I’m usually listening to some news commentary or other academic podcast rather than some random joe’s blog.  What I also do is listen to things that will help me in my job, like development or management podcasts from key industry players (PMI, eWeek, Helms & Peters, etc.).  Being a multitasker, this allows me to build up my skill set and still be working.

        Having been at an advertising agency, I have to agree with Bill.  A company has to provide what its customers (audience) want.  And if a competitor provides this service, then it will lose those customers.  True, a company has to weigh the potential ROI for adding a new service like podcasting.  Yes, for ADA compliance you ought to provide various forms of information, but honestly this provides more choices for customers.  And currently sites like TechRepublic are already doing that, from articles to white papers to newsletters.  They realize that different customers need information in different formats, from printable to mobile (and I classify a podcast as mobile).  Just because you don’t like information in this format doesn’t mean others don’t.  🙂

      • #3163495

        Podcast Only? No Thanks!

        by justin james ·

        In reply to Podcast Only? No Thanks!

        I am not against podcasts. I am against information being distributed as a podcast but not available in any other format. There is nothing inherently wrong with podcasts as long as that information is available in a text format in addition to the podcast, or if the podcast provides a unique value proposition. I did not make this as clear as I should have, as I was originally writing this as a comment to a thread, and I apologize for any confusion or miscommunication.

        J.Ja

      • #3161939

        Podcast Only? No Thanks!

        by andy goss ·

        In reply to Podcast Only? No Thanks!

        I have to agree with Justin James. I am not going to listen to a five
        minute podcast just to find out if I want to listen to it. I can
        eyeball the text of it in seconds to discover if I need to read it, I
        can backtrack and cast forward at will, I can Google on bits I want
        background on, I can cut and paste information I may need again. I can
        ponder on a phrase that is in front of me for as long as I like. Can I
        do any of that with a podcast? With text I do not have to cope with
        regional accents, poor diction, or the irritating verbal mannerisms
        that few people commit to writing but allow into their speech.
        I am sure the podcast has a place in the world, I just can’t think of
        one offhand, unless as a form of light entertainment. Like “push”
        technology, the portal concept, internet fridges, and talking doors
        (“Glad to be of service!”), the podcast is an overrated gimmick, unless
        someone finds a real purpose for it. Then I may use it. Until then,
        like the pointless video clips on news sites, I shall ignore them.

        Andy Goss

      • #3161581

        Podcast Only? No Thanks!

        by aaron a baker ·

        In reply to Podcast Only? No Thanks!

        Same Here;

        I don’t use podcasts, RSS feeds (now there’s something totally useless), or any of the “other” forms of communication that we seem to be killing ourselves trying to achieve. I, for one, like to “read” a story; if I find it interesting, my preferred method for download is PDF.

        I can understand that a podcast may serve a purpose, but I question whether we are really just playing with new toys here. It seems that the minute something new comes out, i.e. podcasts, you’re left with the impression that all things before it are now ancient history. I’m sorry, but I beg to differ.

        I much prefer reading the article; I find it far more relaxing than watching someone trying to “sell me” an idea on a podcast. Naturally, they get all excited and beside themselves and, for the most part, get caught up in the heat of the moment. All this does is annoy me. It’s nowhere near as relaxing as reading the articles and downloading what you like. So by all means, continue with the podcasts and the “blind or vision-impaired” routines, etc.; just make sure that the equivalent is available in the normal way. As for RSS feeds, well, I won’t go there. But I will say this: one of my favorite parts is the area in the RSS feed menu that asks if you want to view this “in your browser.” In your browser, no less, right back where I started from. I laughed out loud and shut her down. Then I wrote a blog about it. Check it out. 😉

        Regards

        Aaron

    • #3148309

      Open Source Does Not Make Better Code. Better Programmers Make Better Code.

      by justin james ·

      In reply to Critical Thinking

      Every now and then, I will see a “dispelling the myths of open source” type of article, blog, discussion, or whatever come my way, and it always seems to come around to the “more eyeballs means fewer defects” idea. For whatever reason, many open source proponents seem to believe that there is a rear guard of closed source folks spreading FUD about open source (even Microsoft has toned down its rhetoric lately). I think that less than 10% of the knowledgeable people out there actually claim that closed source is inherently more secure than open source. It definitely seems that most people (at least amongst those that voice an opinion) believe that open source software is inherently more secure than closed source software.

      In reality, what matters much more than “open” or “closed” source is who is writing and reviewing the code, why they are doing it, and how long they are doing it. If I compare OSS project “Project A” to closed source project “Project B,” and “Project A” is being written by five 15-year-olds who just wrote “Hello World” for the first time last week, and “Project B” is being written by twenty crusty old timers, and “Project A” has a three-person “community” while “Project B” has zero community inspecting the source, I still guarantee that “Project B” will blow “Project A” out of the water. Open source, closed source, it really does not matter.

      Another thing that I find fallacious about this argument is the continual assumption that “open source” means “free.” The two are not the same idea, not by a long shot. Nor are they mutually exclusive. Historically speaking, UNIX is “open source,” but hardly “free.” Indeed, the original 386BSD project came out of the desire for there to be a free UNIX. One of the reasons why so much System V source code has ended up in various UNIXs over the years is precisely because System V was open source, and SCO’s lawsuits exist because System V was not free! On the other hand, there is plenty of free software that is not open source. Just go to any shareware repository and find a piece of freeware that is pre-compiled and that does not include source code.

      What really matters is who is writing and reviewing the code, and money tends to attract the continued writing and review of code much better than whatever it is that actually motivates FOSS coders. Sure, some FOSS projects (Apache, Linux, BSD, MySQL, etc.) attract top talent, but just taking a look at SourceForge shows that the vast majority of open source projects go nowhere. To decide that FOSS is the best possible method of development and quality control based on Windows vs. Linux or Oracle vs. MySQL or IIS vs. Apache or PHP vs. Java, or whatever, is silly. That is like saying that “a Dodge will always be faster than a Chevy” based upon a comparison between the Viper and the Corvette.

      One of the reasons why these projects are able to attract such a large pool of developers and testers has less to do with the fact that they are “open source” than with the fact that they are free. Only a minute percentage of Linux users ever touch their source code, let alone look at it (or even care about it). They are attracted by its phenomenal price/performance ratio. The same can be said for any FOSS project. Thanks to the widespread usage of various package managers, it is fairly uncommon for most mainstream Linux users to even compile from source, let alone modify compiler flags or make changes to source code. If these packages were closed source but still free, most of their users would still use them and be testing them.

      The vast majority of lines of code are written under the radar of most people and do not get any attention. Try comparing a small sample of each type of software. Take a few dozen random items from Source Forge that are in at least a “release quality” state and compare them to a few dozen freeware applications. Then evaluate the difference between closed source and open source. I really cannot tell you what the results will be, but I do know that many of the open source pieces of code that I have used, outside of the “big stuff” (various UNIXs, Apache, MySQL, PostgreSQL) are not so great. For a project that lacks glamour, it is hard to attract someone to spend a large amount of time seriously working on it. It is that simple.

      Mind you, I am not against open source or free software whatsoever! I use it all of the time in my day-to-day life, especially FreeBSD, Apache, MySQL, and Perl. But I also use a lot of paid and/or closed source software, like Windows, IIS, Oracle, Microsoft Office, and so on. Some of my favorite pieces of software are simple shareware applications: Notetab Pro and ThumbsPlus immediately come to mind. It is very rare that I have ever wanted or needed to “look under the hood” of a piece of software. Tomcat/Jakarta required me to do so to find out why it was not behaving as documented. Indeed, most of the time that I have had to look at source code, it was to compensate for poor documentation, not to actually make any change or satisfy a curious itch. I was grateful to be able to inspect the source code, but I would have preferred better documentation instead.

      Many of the arguments that I hear in favor of open source compare Windows to Linux, or Internet Explorer to Firefox. Windows vs. Linux just shows that the Linux coders are better, smarter, and better organized than Microsoft’s Windows coders. Microsoft has had something like ten years to get Internet Explorer right, and they still have not managed to get it nailed down. This is not news. The fact is, closed source shops consistently crank out products better than Microsoft’s, too. One can just as easily compare OS/2 Warp or BeOS or MacOSX to the version of Windows from the relevant timeframe, and Windows still falls short on many (if not most) measures like security, stability, usability, and so on. I note that it is rare for someone to compare MySQL or PostgreSQL to Oracle or Microsoft SQL Server in an “open source versus closed source” debate. Why do we never hear “.Net versus Mono?” Even comparing Apache to IIS is difficult, because IIS is a significantly more ambitious piece of software than Apache.

      Open source, in and of itself does not produce better code. Better coders and better testing produce better code. It is that simple. When a closed source shop has the better coders and better testing, they write the better software. When an open source project has better coders and better testing, they write better software. To think that just because a piece of code can be modified or inspected by anyone and everyone means that the best coders and testers will be modifying and testing that code is just not correct.

      J.Ja

      • #3148234

        Open Source Does Not Make Better Code. Better Programmers Make Better Code.

        by georgeou ·

        In reply to Open Source Does Not Make Better Code. Better Programmers Make Better Code.

        I think Vernon Schryver, the creator of DCC (the Distributed Checksum Clearinghouse), said it best in this post.  It permanently dispels the myth of “many eyes”.

        In addition to this post, there was an Open Source “many eyes” project that called for volunteers to do the dirty work of auditing Open Source code.  It didn’t take long to die because there were almost no volunteers.  The truth of the matter is, no one will do the dirty work of code auditing unless there’s a salary involved.  Most people (the .01% that can even write any kind of source code) just want to do the “fun” stuff.  Code auditing is time consuming and boring.

      • #3148184

        Open Source Does Not Make Better Code. Better Programmers Make Better Code.

        by tony hopkinson ·

        In reply to Open Source Does Not Make Better Code. Better Programmers Make Better Code.

        I got to the end of your post, and my first response was ‘and your point is?’

        Well I nearly agree with you. Certainly I’d expect three experienced developers to come up with a better product than five fifteen year olds. In fact I’d expect them to come up with a better application than five top class honours graduates.

        Obviously if no one feels there is a use for an open source project there isn’t going to be a decent enough community to take advantage of the benefits of peer review.

        A successful open source project (nothing to do with free as in money) will always outperform a successful closed one in terms of security, stability and code quality, for a very simple reason: it will have to be written so that many developers of varying standards of skill can understand it enough to contribute. You can create cases where closed outperforms open, and weight the odds in favour of whatever you like, but the biggest indicator of code quality, by whatever measure, is readability.

        The latter is not a requirement in closed source shops; unfortunately, it’s not a requirement in academia either. Equally, those who do contribute to open source projects have a vested interest in the code being the best it can be, because they want to use it, not the most profitable it can be, because they want to sell it. There is a difference, and it’s a massive one.

        I’m not allowed to do the best work I can, I’m not allowed to do things properly, I’m not allowed to choose a technical positive over a cost negative. That’s business, and it makes perfect sense to me, so why are you comparing apples and oranges in the first place?

      • #3163390

        Open Source Does Not Make Better Code. Better Programmers Make Better Code.

        by mindilator9 ·

        In reply to Open Source Does Not Make Better Code. Better Programmers Make Better Code.

        I see what you’re trying to say, J, but it seems to me that having competent developers is requisite to the issue. You can switch the players on both sides and get inverted results, meaning you’ve proven nothing. It matters not even what field we’re talking about: take any industry, grab 5 inexperienced kids vs 2 masters of their craft, and you get what you would expect. Let’s make the question relevant by starting off with the assumption that we’re dealing with expert programmers who know better than to make the same basic mistakes we see constantly on the web everyday. Of course there are inept programmers on OSS projects, just as there are in closed source, so that’s moot.

        Let’s just look at the 2 classifications of software technology assuming the best programmers of both sides of the fence are what we’re using to gauge the quality by. Then and only then should we compare the best attributes of OSS and closed source, because now we’ve got an even playing field, the experiment’s results aren’t already tilted to one side. Another way to look at it is to compare a race between that Viper and Corvette, but instead of putting a 15 year old in the driver seat of the Viper, make them both pro drivers. Between a Viper driven by a 15 year old and a Corvette driven by Jeff Gordon, my money’s on the Corvette…no brainer. If you want to know the real comparison, you’d have to race Jeff against himself or another equal. Let’s race OSS and closed source, using professional programmers as the natural assumption for implementation, and see who wins.

        Human competence is an issue all its own outside of how well two coding archetypes compete, and is in no way limited to even the IT field. I’ve got 2 excellent programmers and they each use a different type, OSS and closed (M$). What they deliver me is, for me, the real test. If one of them wasn’t excellent, I would not deign to assume one type’s performance over the other.

        That leaves me with one last remark…we’ve got to clarify this question even further and differentiate Microsoft from other closed source definitions. No other closed source organization comes close to the magnitude and scale of Microsoft, the products they produce, and the huge tangled ball of yarn called their code base. Microsoft aside, what OSS vs closed source projects have features which outperform the alternative? We can compare OSS with Microsoft, and OSS with other closed source, but taking all closed source together only convolutes the experiment because it is safe to assume that few if any other closed source organizations have anywhere near the scale of problems Microsoft does. (Maybe cuz they’re not trying to take over the world??). Let’s just stay reasonable about this. After all, reasoning is the trait that propels most of us in this field.

      • #3163612

        Open Source Does Not Make Better Code. Better Programmers Make Better Code.

        by charliespencer ·

        In reply to Open Source Does Not Make Better Code. Better Programmers Make Better Code.

        “… the biggest indicator of code quality by whatever measure is readability.”

        Tony, I thought it was the performance of the compiled code. If it’s readable but slow, how is it high quality?

        Also, why does readability across a range of programming skills mean superiority? “Fun with Dick and Jane” is readable, but it isn’t superior literature.

      • #3163548

        Open Source Does Not Make Better Code. Better Programmers Make Better Code.

        by wilrogjr ·

        In reply to Open Source Does Not Make Better Code. Better Programmers Make Better Code.

        Hallelujah!

        I have been yelling about this one for some time.  There is nothing inherently advantageous in open source.  The problem is that today many decision makers, and indeed the media that often feeds them in some respects, have ‘bought the hype’.

      • #3163023

        Open Source Does Not Make Better Code. Better Programmers Make Better Code.

        by tony hopkinson ·

        In reply to Open Source Does Not Make Better Code. Better Programmers Make Better Code.

        Readability.

        If you can’t read it you can’t maintain it, enhance it, understand it, re-use it.
        The only thing you end up doing is rewriting it, either when circumstances bring another developer on board, or when you go back to it yourself, having forgotten how this obfuscated crap worked.

        Now if you were talking about a scripting language such as Perl, terseness can be a useful performance enhancer, but what you gain there you will lose in development time, so you get a faster product but it costs more.

        Decent compilers don’t care how wordy you are, how long your names are, or how much you break out code to reduce function size. In fact, if you design with the compiler in mind, you will actually get a performance improvement, as you give the optimiser a helping hand.

        All depends on how good a coder you are, of course!
        LOL
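
        As a small, invented illustration of that last point about wordiness (the class and method names below are hypothetical, not from any project discussed here), both of the following Java methods compile to essentially the same thing, so the readable names and the extracted intermediate variable cost nothing at run time:

            public class ReadabilityDemo {
                // Terse version: legal, but the next reader has to reverse-engineer the intent.
                static double p(double a, double r, int n) {
                    return a * Math.pow(1 + r, n);
                }

                // Readable version: longer names and an intermediate step, identical behaviour.
                static double compoundedBalance(double principal, double annualRate, int years) {
                    double growthFactor = Math.pow(1 + annualRate, years);
                    return principal * growthFactor;
                }

                public static void main(String[] args) {
                    // Both calls print the same number; the compiler does not penalise the wordier form.
                    System.out.println(p(1000, 0.05, 10));
                    System.out.println(compoundedBalance(1000, 0.05, 10));
                }
            }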

         

      • #3152522

        Open Source Does Not Make Better Code. Better Programmers Make Better Code.

        by khushil.dep ·

        In reply to Open Source Does Not Make Better Code. Better Programmers Make Better Code.

        Well – those five kids will learn a great deal, be directed by others who know more, and develop an application that can be adapted by anyone who sees fit to do anything they wish. The crusty old timers will pat themselves on the back and wander off down the happy road of oblivion whilst the users suffer another BSOD and more lost work.

        “IIS is a significantly more ambitious piece of software than Apache” – you mean it’s got more bugs in it, right? Has the idea of software modularity died or something? Do something really well. Don’t do everything badly.

        The whole point of Open Source is that when something goes wrong it can be fixed more quickly by more brains. It means companies can’t hold others to ransom, nor jeopardise other businesses by their inaction.

        It’s about an ethos of getting everyone pitching in and sharing the load and the glory. The cult of self-worship that closed source coders get into is really sad. Genius is one idea – it means standing on the shoulders of giants. Giants who give you a leg up.
         

      • #3152512

        Open Source Does Not Make Better Code. Better Programmers Make Better Code.

        by glen.greaves ·

        In reply to Open Source Does Not Make Better Code. Better Programmers Make Better Code.

        Open Source is not some sort of training program for coders. The argument here should be whether open source is a better model for developing software than the closed source model. Which is better: 5 coding experts or 50 coding experts poring over code? From a closed source point of view it is cheaper to employ 5 rather than 50, but wouldn’t the closed source model lose the knowledge and experience of the 45 experts not involved? Open Source will always be the best model for developing software.

        Demand for certain types of software will always dictate where the coders go. If your package hits millions of desktop users, then you are on to a winner. Do closed source projects of no value get any attention? Let me provide one simple example. Anyone heard of Valadeo Technology Corp and a great product called LiveSite (web design and content management software)? Don’t bother looking it up. They have long since disappeared, and with them LiveSite, which was miles ahead of any offering from Microsoft, Adobe and Macromedia (before the buyout). What are the chances of that happening to Apache…?

      • #3153554

        Open Source Does Not Make Better Code. Better Programmers Make Better Code.

        by vytautasb9 ·

        In reply to Open Source Does Not Make Better Code. Better Programmers Make Better Code.

        Your arguments are logical.  One thing I noticed in the Open Source community is that there seems to be no certification or accreditation of the software.  I think industry and military organisations would seriously consider using OS software if there were some guarantee that the OSS provides both the desired functionality and security.  Perhaps the OS community could sponsor such a certification authority and, based upon accepted security criteria, issue certificates?  A great killer app would be open source software together with a certificate of compliance.  The goal of getting software certified could also be shared by the developers of proprietary software.  Perhaps proprietary software would improve as well from the competition?  May the better software win.

      • #3152832

        Open Source Does Not Make Better Code. Better Programmers Make Better Code.

        by mark miller ·

        In reply to Open Source Does Not Make Better Code. Better Programmers Make Better Code.

        I see some commenters are confusing the issue. The point that J. James is making is that it’s the people that make a difference on a project, NOT the process by which the software is made. For several years I’ve heard claims made both online and offline that OSS is inherently more stable and secure, because of the open source process of making software: the “many eyes” theory. There is some truth to this notion, but it is not universal. From what I’ve read from a few sources, developers who work on open source projects tend to focus on what’s exciting about it. There are aspects of projects that are boring, whether it be auditing the code, or focusing on a feature of some piece of software that is necessary for its functioning, but just doesn’t scratch that itch that programmers have. These areas can tend to get neglected. In other words, there are “many eyes” on some aspects of the project, but very few on others. This equation can change if the primary people working on the project are paid developers, even if the project is open source. Paid developers have more incentive to listen to direction from a competent product manager, who will insist that the boring aspects of their projects get their attention.

      • #3152797

        Open Source Does Not Make Better Code. Better Programmers Make Better Code.

        by wayne m. ·

        In reply to Open Source Does Not Make Better Code. Better Programmers Make Better Code.

        “Many Eyes” Is Not Better

        Though this is really a sidebar to the major thought in the blog item, I would like to point out that multiple reviewers can actually be a detriment to quality.

        It was pointed out by Dr. W. E. Deming, among others, that multiple reviewers lead to a decrease in attentiveness by individual reviewers, plus a “groupthink” issue, where individuals become hesitant to point out issues that no one else has raised.  Having a thousand reviewers becomes no more effective than having zero reviewers.

        This is by no means intended to be a swipe at open source, merely a recognition that any development approach requires an explicit mechanism to identify shortcomings.  Relying on lots of reviewers simply will not get the job done.

         

      • #3152775

        Open Source Does Not Make Better Code. Better Programmers Make Better Code.

        by jmgarvin ·

        In reply to Open Source Does Not Make Better Code. Better Programmers Make Better Code.

        Most posts here couldn’t be more wrong. I don’t know why this
        myth keeps hanging around. Look at the security and tight code of
        Apache. Look at some of the FLOSS projects that, out of the box,
        are more efficient, secure, and effective than their counterparts.

        I suggest everyone take a read of:
        The Cathedral and the Bazaar
        Nip Nikolai Bezroukov’s card in the bud
        Open Minds, Open Source
        The Magic Cauldron

        Better Quality Control
        No more Secrets
        A good paper on the benefits and drawbacks

      • #3152752

        Open Source Does Not Make Better Code. Better Programmers Make Better Code.

        by hopefulcoder ·

        In reply to Open Source Does Not Make Better Code. Better Programmers Make Better Code.

        Unfortunately, open source today is just MS Windows vs Linux, MS Office vs OpenOffice, and MS IE vs Firefox. They just don’t think beyond that (the open source guys). Even if they do want to open source everything, who is going to fund them? :)) To all OS guys, how about cloning WebSphere, Rational Rose, Java, etc.?

      • #3152669

        Open Source Does Not Make Better Code. Better Programmers Make Better Code.

        by jmgarvin ·

        In reply to Open Source Does Not Make Better Code. Better Programmers Make Better Code.

        Who’s going to fund FLOSS?  The consumers.  Those who buy the products.

        Just because it is free doesn’t mean it is free as in beer…

      • #3152648

        Open Source Does Not Make Better Code. Better Programmers Make Better Code.

        by apotheon ·

        In reply to Open Source Does Not Make Better Code. Better Programmers Make Better Code.

        I’ll respond to points in an order opposite that in which they’ve occurred.

        1. Funding: Funding buys manpower in software design. Since open source software provides free manpower, funding isn’t really necessary. On the other hand, jmgarvin is right: it’s entirely possible to sell open source software (and support and so on), and you can then use some of that funding to pay programmers who will improve the software.

        2. Cloning Software: There are open source “clones” of the Sun JVM. I don’t know much about Rational Rose, so I’m afraid I wouldn’t know whether there’s a “clone” of that, but I do know that it runs on Eclipse, which is open source. There are open source systems that provide functionality equivalent to WebSphere’s, though there are significant differences (since they’re competing technologies and not merely an attempt to copy it feature-for-feature).

        3. jmgarvin’s List of Links: You forgot one. Have a look at Security through visibility: The secrets of open source security. What’s that? Why, yes, I am sort of an attention whore at times.

        4. Supposed Refutations of the “Many Eyes” Principle: You’re assuming “many eyes” in an open source development methodology are subject to the same limiting circumstances as “many eyes” in a closed source development methodology. The truth of the matter is that in a popular open source project, there are typically at least dozens of people who regularly skim bug reports and fix a bug now and then, and those bug reports are generated when someone has trouble using the software for some reason. As a result, flaws and vulnerabilities in software get fixed without having to institute a formal review process as you would with closed source software. There’s no “groupthink” issue, no “unglamorous work” issue, and so on. When someone defines a bug for you, it’s generally pretty easy to fix it. In addition to this, people can fix their own bugs, so when a bug is noticed by someone who not only uses the software but also knows enough to be able to fix it, that person can submit a patch so that later updates to the software will include the bugfix. This adds the programmers in the software’s userbase to that “many eyes”, without suffering any of the problems that many paid eyes would create in a closed source project.

      • #3152620

        Open Source Does Not Make Better Code. Better Programmers Make Better Code.

        by rev.hawk743 ·

        In reply to Open Source Does Not Make Better Code. Better Programmers Make Better Code.

        Point One: The issue is fewer bugs.
        Point Two: Then, when there are bugs, the shortest fix time.
        Point Three: No repeated bugs.

        Now, which fits: Open or Closed?

        In my household, my wife is a great person, but getting the Yahoo! web page to look the way she thought it should under Windows 98, 98SE, ME, XP Home, and XP Pro just did not work. But with a load of Red Hat Fedora Core 2, it was done in three minutes. Getting it done once is best, don’t you think? She is sold on “open,” and she lives without .pps and .wmv files and some e-cards that don’t work under Mozilla. That is fact, and fact is all that is real.

        All other things said, that is what we paid for: a real, working OS, not junk out of Washington State from people who will charge you $495.00 to strip software from your machine that you wrote and reconstructed from the disks that Windows 95 blew up, all the while charging to “help” you. Then later I saw code in the Windows API for a three-button mouse that did not even fix the misspellings in the button identifiers, i.e. leftwinmousebuttun, middlewinmousebuttun, rightwinmousebuttun, and some other typos that work in the source code anyway. Who needs to fix misspellings when you are just trying to get that really nice trackball working again in your own machine? Now, I do not spell well at all, and whoever wrote those mnemonics did not either. I did not write all of that code; I just fixed what Washington State did to it while I had exams to get to and studying to do. I know there is bad grammar here as well, but time was short and those classes just kept coming around. So they did not even correct the grammar in the comment statements, either. That is what “closed” gets you: compiled code shipped with the symbolic parts left behind in the C:\mouse\*.* directory. Some of it stays closed because it is not cheap to open it up as proof. That is why Washington State cannot open source anything: very little at Washington State is without problems.

        I have used many different OSes: every one of Microsoft’s, PC DOS, OS/2, Portals, UNIX, and various Linuxes. I loved OS/2 Warp 3.0 and 4.0, which started out as a joint MS/IBM project but later was IBM-only while still carrying Microsoft copyright markings in its programs.

        The open source systems do have a few weak spots, but on the whole POSIX is the most compatible. Compatibility is what sold Microsoft’s OSes, but then Office 97 does not work with Office 95 or Office 98, and the .rtf files Write produces under XP Home do not work with Write under XP Pro either. So a parent brings home an HP with Windows XP Home for the kids to do homework on; the kids use Write for that homework and take it to school, where the school uses XP Pro, and WordPad (Write by another name) just will not show the kid’s work to the teacher. To top it off, the XP Pro machine does not recognize the floppy that was formatted by the kid’s XP Home computer. Microsoft has yet to answer that one. Those are not bugs; that is software made incompatible by Microsoft’s own design choices. I did the testing myself with other OEM-loaded machines: yes, they do not read each other’s floppies, and Write does not read Write output even when sent by email to the other machine. My own test went from an “AOL cisnet XP Home edition” machine to an Egghead XP Pro machine; the kids’ test was HP XP Home to Compaq XP Pro. The problem is that if the person formats the floppy before use, the non-recognition is tripped. With the same Technics and Panasonic floppy drives under other OSes, no such differences showed up; the same machines were used and no hardware was changed. Why should we defend this kind of programming?

        P.S. The spellings and grammar serve no purpose. The trademarks belong to their respective owners. And these tests are not conclusive, but there is reason to be aware. Teachers are not computer testers; why should they be? P.P.S. Please strip and edit for safer use.

      • #3153979

        Open Source Does Not Make Better Code. Better Programmers Make Better Code.

        by jaqui ·

        In reply to Open Source Does Not Make Better Code. Better Programmers Make Better Code.

        “Open source, in and of itself does not produce better code. Better coders and better testing produce better code. It is that simple. When a closed source shop has the better coders and better testing, they write the better software. When an open source project has better coders and better testing, they write better software. To think that just because a piece of code can be modified or inspected by anyone and everyone means that the best coders and testers will be modifying and testing that code is just not correct.”

        The reason that Open Source has attracted the better coders is simple: the pool they draw from is every coder in the world. The proprietary software houses have only the pool they hire, and the best coders are not all working for the same employer.

        Yet all the best coders in the world can contribute to any open source project they want to.

        Why is most popular open source software so blessed with good coders? Because they choose to contribute to a piece of software that they see is useful. All open source software is rated on that one point: if it’s useful, it will be used and will draw a community of people to support it. If the original creator of it dies, the software can continue to be improved. If no one finds the software useful, it will die a quick and painless death. Proprietary software is rated by how much money the company makes off it. If they don’t make enough, it doesn’t matter how useful it is, or what quality it is; it’s gone.

      • #3157249

        Open Source Does Not Make Better Code. Better Programmers Make Better Code.

        by grantwparks ·

        In reply to Open Source Does Not Make Better Code. Better Programmers Make Better Code.

        “Better code” can be written regardless of the language or platform; your point is a red herring.  I am more concerned with “producing a better product.”  I also prefer to think in terms not of “open source,” but of open technologies, or more precisely, W3C technologies.  I just left a document authoring project that was being implemented using MS Word’s Smart Document framework, and I couldn’t have been more disgusted.  Using markup, stylesheets and a browser, a much cleaner, smaller, more user-friendly and easily deployable solution could have been created six times over in the time it took us to create a kludgy, monolithic application that the users were not very pleased with.  And it frustrated me to death that every Web article I ran into while looking for help praised Smart Document as some wonderful advance in user/document/form capability.  The user interface was clearly taking ten steps backwards from what Web users have become accustomed to.  The cost to the company to write and deploy the application was much higher.  It was a lose-lose situation.  The modular, unobtrusive nature of W3C technologies allows them to be smoothly combined to produce better light-weight, fast, modular solutions.

    • #3162765

      Eclipse and NetBeans Need To Cooperate Or Else Java Will Lose

      by justin james ·

      In reply to Critical Thinking

      Ed Burnette over at ZDNet likes to spend a lot of time writing about Eclipse versus NetBeans. He rarely gets into the technical merits of either piece of software, but he definitely seems to favor Eclipse. To be perfectly honest, I have never used either one of them. I tried to download Eclipse a few weeks ago, partially based upon Mr. Burnette’s high ratings of it (he and I disagree on many things, but I respect and trust his opinion), but their website is bad to the point that I had no clue which files I actually wanted to download. What I just do not "get," though, is why it really matters.

      I know, I know, it really does matter. But it should not.

      If both NetBeans and Eclipse are creating "100% Pure Java" then both of them will be creating code that could be used in the other one. Both of them should be generating code that does not rely on a particular application server or VM to be installed. Was this, or was this not, the "code portability" promise behind Java? To be honest, Java as a language offers very little if the code is not portable. Java code is supposed to be portable, but in reality it is not. Sun has a bad habit of breaking backwards compatibility between Java releases. It is bad enough that Java code needs to be targeted at a particular version of a VM from a particular vendor, running on a particular application server on a particular OS.

      My last major experience with writing Java involved having to resolve differences between Tomcat/Jakarta and iPlanet. Bleh. If Java was just a slow (and it is slow) system with a miserable syntax (I love writing ten lines of code to take the place of just one line of code) I could accept it. After all, I do spend part of each day writing VB.Net and VBA code, and I can accept those languages, despite their myriad of flaws. But it is a broken system. The idea that if a user clicks on the "update your Java now!" icon, my application can/will break is horrible. The idea that if I prefer one IDE and another IDE cannot use or run my code is even worse.

      The Java crowd just does not understand this. They are so busy fragmenting and fracturing the language that Microsoft is going to run right around them. If Microsoft (or Mono) can get .Net running properly on non-Microsoft OS’s (admittedly quite a trick, considering how one of .Net’s principles is to make the Windows API usable and expose as much of it as possible in a consistent fashion), then Java has no reason to exist. Java and .Net are both system resource hogs and slow, although my experience has been that .Net is a bit less of a pig. .Net has the advantage of letting you write in whatever language you want, provided there is a .Net compiler for it. And .Net is much better about framework versions and backwards compatibility than Java is. At that point, Microsoft will just have to convince people that they are not evil and that going with .Net will not enslave them to Redmond forever.

      Personally, I do not want to see this happen. Microsoft is not a great company when it gains an overwhelming market share in a category. Microsoft is an awesome company when it has competition. .Net is a great example of this: I think .Net kicks Java with a set of spiked golf cleats "where it counts" on just about everything. I still do not think that .Net is great, but I would demand a raise if I had to spend any measurable amount of time writing Java code. But Microsoft needs the competition from Java to keep them honest. .Net would not have happened if not for Java, most likely. And the pressure from Java keeps Microsoft driving hard for improvements. As much as I dislike Java as a language, I love what it does for the IT ecosystem and economy. And I have little loyalty. If Java can show itself to be a superior system, I would not object to using it. Furthermore, I have a strong suspicion that I will be using Java on a regular basis in the near future to work with Cognos ReportNet. So it really is in my best interests to see Java be the best that it can be. To have two major IDEs producing incompatible code is not how to do that.

      • #3153612

        Eclipse and NetBeans Need To Cooperate Or Else Java Will Lose

        by matthew_web ·

        In reply to Eclipse and NetBeans Need To Cooperate Or Else Java Will Lose

        It should be noted that .Net has a lot fewer previous versions out there in comparison to Java, so looking at levels of backward compatibility is a bit skewed…

      • #3153562

        Eclipse and NetBeans Need To Cooperate Or Else Java Will Lose

        by pan ·

        In reply to Eclipse and NetBeans Need To Cooperate Or Else Java Will Lose

        Eclipse and NetBeans do produce compatible code. They both produce code that can be deployed in any Java app server, or packaged as jars that can be executed from a compliant JVM. If you use Ant for your builds it’s even easy to create a project in one IDE and then port it directly to the other.

        Do you have any examples where NetBeans and Eclipse produce code that isn’t compliant?

      • #3153550

        Eclipse and NetBeans Need To Cooperate Or Else Java Will Lose

        by tlfu ·

        In reply to Eclipse and NetBeans Need To Cooperate Or Else Java Will Lose

        I suspect that Justin is referring to code generated by the UI designers of each of the IDEs. If you use the NetBeans UI designer to develop a Swing-based UI, you cannot then switch to Eclipse and use its UI designer on the same code. This often causes Java developers to shy away from using the IDE’s UI designer tools at all.

      • #3153426

        Eclipse and NetBeans Need To Cooperate Or Else Java Will Lose

        by korkiley ·

        In reply to Eclipse and NetBeans Need To Cooperate Or Else Java Will Lose

        I haven’t used it, but I see that there is a NetBeans Eclipse Project Importer now included in NetBeans.  You are also supposed to be able to create a .war file in NetBeans and then use the “File > Import > WAR File” menu/wizard in Eclipse WTP to import a NetBeans web project.  I haven’t used any of this stuff, nor am I a Java developer.  I’ve used Eclipse in the past for learning purposes and found it a bit confusing.  A quick look at NetBeans gave me the impression that it was better organized and more intuitive, but that’s a very hasty judgement.  And yes, applications written in Java seem to be extremely slow, if StarOffice/OpenOffice is representative.  Compared to Pascal and C, languages that I’ve used in the past, I’ve found it pretty tough to get off the ground with Java!

        Kor

      • #3153395

        Eclipse and NetBeans Need To Cooperate Or Else Java Will Lose

        by eldergabriel ·

        In reply to Eclipse and NetBeans Need To Cooperate Or Else Java Will Lose

        Although some will debate the validity of the criticism this author
        makes, i applaud him for making these points nonetheless. The
        developers who are doing the engineering to create the .Net and Java
        platforms/frameworks need to have their collective “noses held to the
        grindstone”. And that goes for IDE developers, too. ALL developers need
        to be held accountable and pressured by the market to make their
        respective platforms/frameworks/tools as simplistic-to-use, yet
        powerful, and consistently open and portable as possible. (Fortunately,
        in the case of eclipse, if there’s something an end-user programmer
        doesn’t like about it, he or she is allowed to change it, due to the
        freedom its license provides. Given this fact, if eclipse provides a
        superior way of doing things, and netbeans refuses to change, then it
        is up to netbeans to “evolve or die”.)

        End-users and programmers have every right to demand that all their
        tools and target platforms play well together. It’s constructive
        criticism like this that keeps developers on the right track. Not only
        do the end users benefit, but i think that ultimately it makes
        everyone’s lives better when all these needs mentioned above are met.

        Incidentally 1) i was also thrown for a small loop by the large and
        slightly unclear selection of packages the first time i went to
        download the windows version of eclipse (Fedora, for one, at least
        whittles it down to a few core packages and eliminates some of the
        confusion). 2) i also find the .Net framework and toolset slightly less
        overwhelming and confusing than what exists in the java world.

      • #3153344

        Eclipse and NetBeans Need To Cooperate Or Else Java Will Lose

        by fmcgowan ·

        In reply to Eclipse and NetBeans Need To Cooperate Or Else Java Will Lose

        People have been predicting the imminent demise of Java for at least 10 years. It’s still here. It probably will be here in 10 more years. It is better than it was 10 years ago and will most likely be better in 10 years than it is now.

        Yes, certain classes and methods are deprecated from version to version, but you generally get a warning message when you try to use one in a new project, rather than an error, for several point releases before the things are actually dropped. An exception was the change in the event model and the resulting change in event handlers when Java went from version 1.0.x to 1.1.x many years ago. The ability to compile a new program in compliance with an old JRE
        spec and the ability to install several different JRE versions on the
        same box at the same time pretty much make this argument a non-starter, anyway.

        The “there must be ONE IDE or people will be confused and go back to VB” argument is a variation on the similar argument regarding how the KDE/Gnome UI “confusion” is keeping users on Windows. Maybe, but so what? The same applies here. Most of the people professing confusion about NetBeans vs Eclipse sound like VB programmers, anyway. OK; use the tool that can do the job and that fits your hand. If your choice is different than mine that’s OK; use .NET if you want, I’ll use Java. Use NetBeans if you like; I can still choose Eclipse. Not a problem.

        I also get a kick out of the argument that basically says that if the alternative to the MS product in any category is not *absolutely* perfect for *everyone* then it’s *useless* for *anybody*. Funny, nobody even tries to make the same argument against using MS products. I guess they *are* perfect for everyone… Right?

      • #3153976

        Eclipse and NetBeans Need To Cooperate Or Else Java Will Lose

        by jaqui ·

        In reply to Eclipse and NetBeans Need To Cooperate Or Else Java Will Lose

        I actually agree with the real meaning of J.Ja’s post. Until one JVM will run every Java application ever written, Java is broken and needs serious fixing.

        Get rid of the JVM options until there is only one.

        Perl, Python, PHP, and Ruby each have one interpreter that runs every application written in them. There is no excuse for the conflicts between JVMs other than poor quality control.

      • #3153744

        Eclipse and NetBeans Need To Cooperate Or Else Java Will Lose

        by kirk12 ·

        In reply to Eclipse and NetBeans Need To Cooperate Or Else Java Will Lose

        I fail to see how competition in the IDE space could be responsible for the failure of an entire platform. In fact, isn’t it competition that has forced the other camp to finally fix some of their creaky stuff? Long live competition!

      • #3156994

        Eclipse and NetBeans Need To Cooperate Or Else Java Will Lose

        by g… ·

        In reply to Eclipse and NetBeans Need To Cooperate Or Else Java Will Lose

        I guess there always will be people to argue, but why would Eclipse and NetBeans really have to cooperate for Java to lose or not? It is Java itself that has to make things happen. As was rightly said here, if there were only one interpreter, there would be nearly no problems at all. If they want to, they can create some, but it would be much easier to cooperate.

        As for the .Net argument… I’m not sure it will “win” against Java in the near future (5 to 10 years), because it is hell to make even the simplest program run on another platform, and we have time before that happens.

        As for the differences between Eclipse and NetBeans, I think everyone will have to agree that they’re not the same. Eclipse has much more in it. When you download the platform, it comes with the PDE (Plugin Development Environment) and JDT (Java Development Tools), and there are thousands of plugins made for Eclipse. I’m not aware of anything like that with NetBeans. NetBeans is “cleaner”, no doubt, and that’s probably because it is more specific, not targeting so widely.

        For my part I LOVE Eclipse, because you can find a plugin for anything you want and it generally takes 10 seconds to install it. I program in Python, C, and Java, and there are Eclipse plugins for all of them, as well as for tools like ANTLR or Swing or anything else you want. I have always found a plugin to help me with my work. And that’s probably the reason NetBeans is better if it’s just Java you’re doing.

        To conclude, I would recommend NetBeans if you will be programming only Java, but Eclipse if you’re doing different things and you like to do it all in the same environment.

    • #3161483

      How to Think Like a Programmer: Part 1

      by justin james ·

      In reply to Critical Thinking

      This is the first of a three part series regarding how to think like a programmer.

      Writing code involves a mode of thinking that is foreign to most people. However, writing code is not a unique process. There is actually nothing very special about it. But most people are simply unequipped to think in the manner required of good programmers. That is not to say that they are unable to put together a piece of software. Anyone with enough knowledge of the right libraries can glue together a piece of software that accomplishes some sort of goal, whether it be taking a list of URLs, fetching the results, and saving them to a specified file location, or querying a database and displaying the results in a particular way.
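
      As a rough sketch of the kind of "library glue" I mean (the URLs and output file names below are invented purely for illustration), a few lines of Java plus the standard library do all of the real work:

          import java.io.FileOutputStream;
          import java.io.InputStream;
          import java.net.URL;

          public class FetchUrls {
              public static void main(String[] args) throws Exception {
                  // Hypothetical list of URLs to fetch.
                  String[] urls = { "http://example.com/a.html", "http://example.com/b.html" };
                  for (int i = 0; i < urls.length; i++) {
                      InputStream in = new URL(urls[i]).openStream();      // the library does the networking
                      FileOutputStream out = new FileOutputStream("page" + i + ".html");
                      byte[] buffer = new byte[4096];
                      int read;
                      while ((read = in.read(buffer)) != -1) {
                          out.write(buffer, 0, read);                      // the library does the file I/O
                      }
                      out.close();
                      in.close();
                  }
              }
          }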

      One of the recent discussions on TechRepublic, “Has calculus enhanced your IT career?” highlights the problem with the way people think about writing software. Too many people fail to look outside of the immediate problem when writing code; they are so focused upon the trees, they do not see the forest. Writing code does not require “thinking outside of the box.” It requires a view in which the box is gigantic. Too many people, when writing code, look at the tools that they are already familiar and comfortable with, and see every problem from that perspective.

      One example of this is the passing of tabular data that is in a rigid format (such as the results of a SQL query) via XML. I have written all too often about the inefficiencies of XML as a format; it is particularly inefficient for this type of data transfer. A flat file format, such as CSV or a fixed length record format, would be created, transmitted, and parsed by the client significantly faster than an XML format. But programmers feel comfortable with XML; JavaScript has built in handlers for it, and the methods for working with it are well documented. In reality, it takes only a few minutes to write a CSV or fixed length file writer on the server side, and only a few more minutes to write the JavaScript to consume it. But few people do things this way, because XML is a “chicken soup” format: it feels comfortable, it is known, and there is no work needed to use it. This is an example of the adage, “when you have a hammer, every problem looks like a nail.”
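
      Sticking with Java for the server side of the illustration (the column names and row values below are made up), here is a minimal sketch of the same two-row result set serialized both ways; the repeated tag overhead in the XML version is exactly the inefficiency being described:

          import java.util.Arrays;
          import java.util.List;

          public class TabularFormats {
              // CSV: one line per row, values separated by commas.
              static String toCsv(List<String[]> rows) {
                  StringBuilder sb = new StringBuilder();
                  for (String[] row : rows) {
                      for (int i = 0; i < row.length; i++) {
                          if (i > 0) sb.append(',');
                          sb.append(row[i]);                 // real code would escape commas and quotes
                      }
                      sb.append('\n');
                  }
                  return sb.toString();
              }

              // XML: every value wrapped in opening and closing tags, repeated for every row.
              static String toXml(List<String[]> rows, String[] columns) {
                  StringBuilder sb = new StringBuilder("<rows>\n");
                  for (String[] row : rows) {
                      sb.append("  <row>");
                      for (int i = 0; i < row.length; i++) {
                          sb.append('<').append(columns[i]).append('>')
                            .append(row[i])
                            .append("</").append(columns[i]).append('>');
                      }
                      sb.append("</row>\n");
                  }
                  return sb.append("</rows>\n").toString();
              }

              public static void main(String[] args) {
                  String[] columns = { "id", "name" };
                  List<String[]> rows = Arrays.asList(
                          new String[] { "1", "Smith" },
                          new String[] { "2", "Jones" });
                  String csv = toCsv(rows);
                  String xml = toXml(rows, columns);
                  // The size difference per row grows with the number of columns.
                  System.out.println("CSV bytes: " + csv.length() + "\n" + csv);
                  System.out.println("XML bytes: " + xml.length() + "\n" + xml);
              }
          }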

      For decades, programmers were scientists or mathematicians who ended up writing code. The mode of thought needed to write quality code is very close to the way physicists, biologists, mathematicians, etc. approach their problems. Interestingly enough, much of the literature in the field of Computer Science nearly always ends up in the realm of Philosophy. Any discussion on Artificial Intelligence is going to take about three seconds to become a philosophy discussion; there is no way around it. Some of the most interesting uses of computers for research purposes are being done by philosophers working with game theory. “Philosophy” simply means, “the study of something which cannot be quantified.” As a particular topic becomes more rigidly defined, it becomes a subset of philosophy (such as psychology in the years of Freud and Jung). Eventually, someone finds a way to quantify the topic, and that is when it becomes a science (psychology after Skinner).

      To be not just a good programmer, but a great programmer, you need to understand the scientific and philosophical ways of thinking. There just is no way around it. I worked for a company where most of the programmers had degrees in Physics and other similar scientific fields. They were excellent programmers. The programmers that I have met that came through a program where they were taught how to write code simply were not as good. Their education was based upon the syntax of a particular few languages and how to accomplish certain tasks. This is “how”-oriented programming. These programmers simply did not understand the “why” of what they were doing. What Calculus (and other high level math courses) teaches someone is not just the math itself; it teaches a way of thinking about problems. This way of thinking is crucial for any programmer.

      You can be a “shake and bake” programmer by going through something like DeVry or Chubb or Kumar, or whatever. But if you do not extend your knowledge, you will have a very hard time being more than just the type of programmer who glues libraries together. You will have a hard time working on the high level projects, like Artificial Intelligence, compiler design, languages, kernels, device drivers, and so on. No one wants to hire someone to write machine code, assembly, or some other low-level code who will write inefficient code, or who does not understand the principles of algorithms, and so on.

      J.Ja

      This is the first of a three part series regarding how to think like a programmer. In the next part of this series, I will discuss some techniques for expanding your thinking skills to be a better programmer. In the third part, I will cover the difference between short term expediency and long term solutions.

      • #3162426

        How to Think Like a Programmer: Part 1

        by wayne m. ·

        In reply to How to Think Like a Programmer: Part 1

        Three Kinds of Thinking

        I would say the difficulty of “thinking like a programmer” is that one needs to marry three types of thinking: why, how, and what.

        I would characterize “why” thinking as understanding the desired functionality.  “How” thinking is the mechanics of design, both the syntax of the language and common design approaches and architectural patterns.  “What” thinking is understanding the implications of various choices; this typically affects both the performance of the software and of the people using the software.  Consider some extreme stereotypes of the various types of thinking.

        Why thinking solves the stated problem exactly, with no generalization or commonality.  This results in the so-called “big ball of mud” code.  A typical example is a shell script that has grown too big.

        How thinking focuses on structure.  This results in pretty and symmetrical UML diagrams that only just meet the requirements, with lenient definitions of “meet.”  A typical example is the inability to meet a simple request simply.

        What thinking focuses on performance.  This results in terse code, often requiring very few key strokes and extensive detailed knowledge of a particular language.  Typical examples are published as “Obfuscated C” examples.

        The difficulty of programming is that it requires all three types of thinking: Why, How, and What.  One has to look at a solution from all three directions.  It takes a lot of experience to switch modes of thinking and compare and contrast approaches to find the best fit of differing constraints.

         

      • #3152622

        How to Think Like a Programmer: Part 1

        by apotheon ·

        In reply to How to Think Like a Programmer: Part 1

        I see nothing in particular with which to take exception in your post, J.Ja. In particular, though, I’ll comment on the calculus issue:

        Calculus itself doesn’t really teach any skills that are needed. In fact, higher algebraic maths would be even more helpful than calculus. What’s helpful about learning calculus is the mathematical logic it teaches, and what’s helpful about the mathematical logic is the fact that it’s logic. As such, something like Linear Algebra would be more helpful than Calculus II. On the other hand, a course in Symbolic Logic would be about as useful as an equivalent-level algebra course — beyond basic algebra and logic, mathematics is not very useful to programming skill at all.

        Calculus ensures a fairly solid grasp of logic and algebra, which is a good thing for programming, to be sure. There are other ways to get the same skills, though, without having to estimate the volume of an irregularly shaped container.

      • #3154102

        How to Think Like a Programmer: Part 1

        by kirk w. ·

        In reply to How to Think Like a Programmer: Part 1

        I find it interesting that you promote Calculus as important to being a great programmer, yet you say that the excellent programmers had degrees in Physics.  Calculus is necessary for a Physics degree, and my experience has been that it is the scientific way of thinking that makes for excellent programmers, not the mathematical way of thinking.  Mathematics does not teach the deep analytical thinking that science does.  It is the development of a hypothesis, performing an experiment, and analyzing the results that form the basis of great programming (e.g. analyzing requirements, designing code, test-driven development).

      • #3161069

        How to Think Like a Programmer: Part 1

        by staticonthewire ·

        In reply to How to Think Like a Programmer: Part 1

        It appears that you are trying to quantify what defines a good
        programmer (How to “think” like a programmer is the same as how to “be” a good programmer, yes?). This is a hopeless activity – charming in a way, and
        bemusing; it will fill many hours of time if you let it.

        But it isn’t truly productive in the long run, because programming itself remains undefined.

        Programming remains undefined for good reason: It isn’t a single
        activity, it isn’t a monolithic discipline. The characterization of a
        good database programmer is wildly different from that of a good web
        programmer or game programmer or widget programmer, or AI programmer,
        and if you reduce your list of characteristics to what is common to all
        sorts of programming you end up with a plain vanilla list of things
        that are good in ANY discipline… “good programmers are…
        CAREFUL…”. Yeehah.

        Programming is achieved with language; any language worth the name is
        an open system, open systems cannot be meticulously defined, they can
        only be mastered – ask Hilbert and Russell, who spent half their
        professional lives trying to formalize mathematics before Godel upset
        the applecart.

        Scientific background is certainly helpful to a programmer; it is also
        helpful to an accountant, and even to a priest, and probably to a broom
        jockey too. Creativity is also helpful in navigating most disciplines,
        but saying things like this isn’t really saying anything useful.

        I think you’re pursuing an interesting line of thought, but I think you’ve
        framed it too narrowly, creating a figure/ground confusion for yourself
        – mistaking the dollar for what it buys. You correctly identify the
        point of phase change, when a discipline shifts from
        philosophy/folklore/religion to science – it’s when quantification and
        repeatability creeps in. But programming per se can never BE a science,
        any more than English can be a science, or Urdu, or Tagalog. The STUDY
        of English can be a science, and the STUDY of programming can be a
        science, but a language itself, whether C++ or French, is not science.
        There’s a difference between biology (a science) and elephants (one of
        the objects biologists study).

        Certain efficiencies can be achieved by programmers if they acquire
        analytical skills and problem solving skills, and these skills are also
        common to many scientific disciplines, but they are not a priori
        scientific methods, they can be applied to landscape painting or
        cooking or burping babies too. I have frequently found myself using the
        scientific method of theory/test/repeat while debugging, or when
        building a special use compiler, but I also use it in analyzing the path I take from my front door to the train station. Programming is more like music
        than anything else, I think – part math, part inspiration, part luck,
        part muse.

        The example you gave of reducing tabular XML data to string data and
        parsing it so – this is not a balanced example. In a large organization
        dependent on a phalanx of programmers doing things transparently, the
        efficiencies of your one-off string parser are rapidly lost in the
        welter of confusion that such cowboy antics can create in a big shop.

        Big shoe
        factories don’t employ cobblers, no matter how artistic/elegant/efficient their work may
        be. But certainly there is room in the world for custom shoes crafted
        in unique ways, and there will always be a boutique market for custom
        code of the sort you describe. XML is just a way to get data from point
        A to Point B, better than some ways, worse than others, and most
        importantly better sometimes and worse sometimes. And once again we run
        into the impossibility of thoroughly quantifying the needs of “this
        discipline we call code”. Sometimes ya need one thing, sometimes
        another.

        Your contention that Skinner quantified Psychology is just wrong
        – the bastard learned how to brainwash pigeons, for heaven’s sake,
        something every major religion achieves in its infancy, and a human
        legacy thousands of years old. He sure used big words to describe it
        though, didn’t he? Much of his work has been discredited or
        dramatically reduced in scope, along with many of his data collection
        methods. Most importantly, he failed to account for feedback – the
        effect an organism can have on itself. He refused to consider internal
        stimuli at all.

        He made one truly interesting discovery: His Paradoxical and
        Ultra-Paradoxical Phases, when a stimulus would no longer be required
        in order to evoke a conditioned response, or would evoke
        counter-intuitive responses. Don’t sound like science to me. I’m afraid
        psychology (define “psychology”) remains at best a mendicant’s art, and
        its practitioners are mostly self-deluded charlatans and pill pushers.

        I think you’re several decades ahead of your time, J.Ja. – might be
        better to define the field of endeavor – programming – before you start
        in on the qualifications necessary for its practitioners.

        This doesn’t mean the discussion is useless – there are many
        interesting things about mathematics and much has been achieved with
        it, despite it being unquantifiable and inherently intractable when
        subjected to analysis. Little arenas of clarity can be staked out in
        the chaotic jungle of our discipline, and they should be. But don’t
        imagine that the rules that apply in one arena necessarily apply to the
        clearing just down the road…

      • #3160730

        How to Think Like a Programmer: Part 1

        by mark miller ·

        In reply to How to Think Like a Programmer: Part 1

        staticonthewire said:

        The STUDY of English can be a science, and the STUDY of programming can be a science, but a language itself, whether C++ or French, is not science.

        Maybe I misunderstand you, but C++ or any programming language is based on scientific and mathematical principles. In contrast, French or any human language (perhaps with the exception of Esperanto) is not. Programming languages are based on a structure called a context-free grammar. French and other human languages have context-sensitive grammars. It’s been a while, so I don’t remember exactly what “context-free” means when talking about programming languages, but I know it has to do with disambiguating the language so that a compiler or interpreter can parse the code and produce logically consistent results. This may be a bad analogy, but the one that came to mind is that it gets rid of the “Who’s on First” (referring to the Abbott and Costello routine) types of confusion. Programming languages have been fashioned in such a way that programs written in them always reduce down to a string of symbols (if the program conforms to the grammar correctly), which may then be translated to machine code that can be executed in a logical fashion by a computer, exactly as specified by the source language. So there is a science to creating programming languages.
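
        To make the “context-free” idea concrete, here is a toy grammar and a hand-rolled parser for it (my own made-up sketch, not taken from any particular compiler or textbook). The comments show the grammar in rough BNF; the three functions mirror its three rules, which is exactly the property that makes context-free languages so mechanical to parse:

        /* Toy grammar, in rough BNF:
         *   expr   ::= term   { ('+' | '-') term }
         *   term   ::= factor { ('*' | '/') factor }
         *   factor ::= NUMBER | '(' expr ')'
         */
        #include <stdio.h>
        #include <ctype.h>
        #include <stdlib.h>

        static const char *p;          /* cursor into the input string */
        static double expr(void);      /* forward declaration */

        static void skip_spaces(void)
        {
           while (isspace((unsigned char)*p)) p++;
        }

        static double factor(void)
        {
           double v;
           skip_spaces();
           if (*p == '(') {                      /* factor ::= '(' expr ')' */
              p++;
              v = expr();
              skip_spaces();
              if (*p == ')') p++;
              return v;
           }
           return strtod(p, (char **)&p);        /* factor ::= NUMBER */
        }

        static double term(void)
        {
           double v = factor();
           skip_spaces();
           while (*p == '*' || *p == '/') {      /* zero or more '*' or '/' factors */
              char op = *p++;
              double r = factor();
              v = (op == '*') ? v * r : v / r;
              skip_spaces();
           }
           return v;
        }

        static double expr(void)
        {
           double v = term();
           skip_spaces();
           while (*p == '+' || *p == '-') {      /* zero or more '+' or '-' terms */
              char op = *p++;
              double r = term();
              v = (op == '+') ? v + r : v - r;
              skip_spaces();
           }
           return v;
        }

        int main(void)
        {
           p = "2 * (3 + 4) - 5";
           printf("%g\n", expr());    /* prints 9 */
           return 0;
        }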

        How a programming language is used is not restricted to scientific methods, though they can be applied to it.

      • #3160566

        How to Think Like a Programmer: Part 1

        by staticonthewire ·

        In reply to How to Think Like a Programmer: Part 1

        Two excellent points, Mark, but my interest (and my post) were based on
        the second of your two points. It’s true that all useful programming
        languages are based on a defined context-free grammar, but this doesn’t
        mean that all productions of that grammar can be formalized
        (which is your second point, and the one I emphasized in my post). I’m
        not so interested in how a language is created – mathematics is also
        based on a formal context-free grammar, and while it’s fun to play
        around with Backus-Naur forms and come up with little languages of
        all sorts, the real interest for me is in what a language can produce.
        As you point out, even formally defined linguistic systems have an
        infinite, open-ended production list.

        And that’s why I was bemoaning J.Ja’s campaign to quantify the characteristics/qualities of a good programmer.

        The one thing all computer languages have in common – even the
        “symbolic” languages and the cut-n-paste/drag-n-drop tools – is that
        they are all languages. And languages, as we have both now
        pointed out, are unconstrained production systems. This, it seems to
        me, indicates that the skills a programmer of ANY language would need
        would depend on what production subset they were specializing in. This
        is what I meant when I said that different kinds of programming require
        different skill sets.

        This sort of defeats J.Ja’s whole campaign, since, if a language is
        capable of infinitely many productions, it follows that there are
        infinitely many production subsets, thus, infinitely many skill-sets,
        thus an enormous difficulty in defining a non-contextual and still
        useful set of programmer skills or programmer qualifications.

        J.Ja’s overall goal (the formalization of characteristics for a good
        programmer), like David Hilbert’s and Bertie Russell’s, seems
        impossible to me. But as I also noted, things of value can still come
        from the attempt.

        Maybe I’m taking this ‘way past the point J.Ja intended, but I don’t
        think so, because he specifically refers to the phase change between
        scientific and non-scientific endeavors, and I assume he did that for a
        reason – i.e., he wishes to place the defining (and eventually,
        perhaps, things like hiring) of programmers on a scientific
        footing. I was simply pointing out that this seems premature – perhaps
        by decades… and in the long run, a bit Quixotic, yes? Since it really
        boils down to an attempt to number an infinite system.

        BUT – I applaud his industry, and suspect we will all learn once again what we already know – you cannot formalize a list of characteristics or skills that fully encompass what it means to be a top-flight programmer. Not in this day and age.

        And you should thank your lucky stars that it is so…

      • #3158793

        How to Think Like a Programmer: Part 1

        by mark miller ·

        In reply to How to Think Like a Programmer: Part 1

        I agree Justin is defining a “good programmer” too narrowly. He’s thinking of programmers as those who work on OS kernels and compilers, or device drivers. Most programmers out there don’t work on this sort of thing, and wouldn’t get the opportunity if they tried (unless they wanted to write these things themselves or join the Linux kernel team or the team developing a programming language).

        I take exception to the notion that physicists make great programmers. I worked with a programmer whose only degree was in Physics. He wrote code that worked, but it was difficult for anyone else to understand. The setting was a company that wrote custom business accounting solutions for a vertical market. He was notorious for writing the following in C (I put in comments to say something happens without actually doing it):

        int Read(char *a, int b)
        {
           int i = 0;
           int c = 5;
           for ( ; i < b + c; i++)
           { /* do something with a */ }
        }

        or,

        {
           while (1)
           {
              if (cond1)
              { /* do some stuff */ break;}

              if (cond2)
              { /* do some stuff */ break;}
           }
           break;
        }

        He just used the while structure so he could use break statements to skip past the bottom of the loop structure once a step was taken.
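
        For comparison, what that construction amounted to is a one-pass dispatch that a plain if/else expresses directly (my reconstruction – the original only hinted at what each branch did):

        #include <stdio.h>

        /* Hypothetical structured equivalent of the while(1)/break fragment. */
        static void handle(int cond1, int cond2)
        {
           if (cond1)
           { printf("step 1\n"); /* do some stuff */ }
           else if (cond2)
           { printf("step 2\n"); /* do some other stuff */ }
           /* either way, execution falls through to here */
        }

        int main(void)
        {
           handle(0, 1);   /* prints "step 2" */
           return 0;
        }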

        Besides the function name, the rest of his code was cryptic. He never learned structured programming, and he got complaints about his one-letter variable names, which were only understandable to him. This made working as a team with him difficult. He was a nice guy, and easy to work with on a personal basis. But by our criteria he was not a great programmer.

        Another programmer I heard about a long time ago, working at a small software manufacturer, was a genius. He could write working, highly functional programs in a matter of a few days that would’ve taken any of the rest of us a few weeks at least. I doubt it had to do with his programming technique, or that he was a faster typist than us. He just had the ability, IMO, to work the whole problem out in his head and then just “stream” the solution into working code. The problem was nobody else could read what he wrote. He programmed in C, but he would use labels and goto’s to implement loops, when a for or while loop would’ve done the job just as well. It was as if he had learned how to program in either BASIC (old-style with line numbers) or assembly language, and just applied the techniques learned in those languages to whatever language he was programming in. It made his code utterly unmaintainable. We were getting complaints about memory leaks in the software, but we had no easy way, by looking at his code, of finding where the problem might be. The only senior engineer there, while I was there, spent a significant amount of his time rewriting this programmer’s code, as structured code, so it would be readable.
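
        To make that concrete, here is roughly the kind of thing I mean – a made-up fragment, not his actual code. Both halves do the same work; only one of them reads at a glance:

        #include <stdio.h>

        int main(void)
        {
           int total = 0;
           int i = 0;

           /* Label-and-goto style, the way he wrote loops: */
        top:
           if (i >= 10) goto done;
           total += i;
           i++;
           goto top;
        done:
           printf("goto version: %d\n", total);   /* prints 45 */

           /* The structured equivalent a for loop gives you for free: */
           total = 0;
           for (i = 0; i < 10; i++)
              total += i;
           printf("for version:  %d\n", total);   /* prints 45 */

           return 0;
        }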

      • #3158762

        How to Think Like a Programmer: Part 1

        by staticonthewire ·

        In reply to How to Think Like a Programmer: Part 1

        I also had trouble with J.Ja’s criteria and definitions, for similar
        reasons. I think science in general, math in specific, and the
        scientific method (in the design and debugging context) are particularly
        pertinent to programming, but as I mentioned, music also seems to be a
        good background; I’ve noticed especially that recording engineers –
        people that work with multi-tracking music software and the layering of
        sound – seem to be able to visualize a program in toto, once they’re bitten by the coding bug.

        Over the decades I’ve seen people from all walks of life with all sorts
        of backgrounds end up as programmers, good and bad. I was once
        responsible for a community college adult education “programming”
        course, and I started the first semester with a dim view of the
        potential I would be dealing with. But I was amazed at how many of
        these Average Janes and Joes were good at it, and it was constantly
        surprising to see who was good at what. A middle-aged matron with an
        Agatha Christie jones was our best debugger, for example. Tenacious.

        We never took programming very far, but it was clear from the beginning
        that some members of the class had a real flair for what we were doing,
        and loved it. Then there was a real interesting transition group – they
        were perfectly capable of understanding the concepts and process, but
        the whole business left them completely cold. And of course there were
        people who never did quite get what a variable was, and left after the
        first month…

        What intrigued (and concerned) me most was that transition group – the
        people who got it, but didn’t care. They were by far the largest
        segment of the class. It was then that I had my first inkling of what
        the main qualification for a good programmer might be – you gotta LOVE
        it. You gotta get a kick out of it. Once you have that, it’s just a
        matter of growth and native gifts. The ebb and flow of the business
        determines where you go. But that love of the process determines how you grow.

        J.Ja. starts his post by saying that it takes a special sort of person to code – “Writing
        code involves a mode of thinking that is foreign to most people” was
        how he put it. I used to think so too, and it pleased me greatly to
        consider myself somehow special. But my experience teaching night
        school stripped me of that particular conceit. Most people in that
        class could code, and competently, if they thought it was important. But of those who could code, very few wanted to code.

        So it isn’t a “mode of thinking” that makes the difference. It’s
        desire, passion, interest. As a profession, we should be looking for
        ways to communicate how exciting programming can be, if we want to
        encourage people to understand it and enjoy it with us. I think this
        might be harder than creating a list of qualifications, but maybe not.

        Programming is everywhere – Word macros, object scripting in games, in
        3D Studio, in spreadsheets, in web pages; it’s all over the place, in
        hundreds of variations. There’s no common skill-set that encompasses my
        teenaged son hacking Runescape servers AND my ex-wife’s mother building
        Excel macros AND me, writing real-time risk analysis software for day
        traders. Sorry. The only common element is – lust. Avidity. The desire
        to make something happen, the realization that you can do it, the
        willingness to make the attempt…

      • #3159342

        How to Think Like a Programmer: Part 1

        by mark miller ·

        In reply to How to Think Like a Programmer: Part 1

        To staticonthewire:

        I think you’re right that passion is key. It sounds like my experience. I always had a love for machines and science. As a child I was leaning towards becoming a meteorologist. I’ve sometimes said that if computers didn’t exist, I’d probably be one of those storm chasers now. My first encounter with a computer was mesmerizing. I was 7 or 8. Learning to program one, though, at the age of 11 was intimidating. I first learned to program on computers available at my local library. What really drew me to it was seeing other people program these computers, and seeing what they were doing with them. I loved the idea of making the machine do what I wanted it to, so I kept at it. I made a LOT of mistakes. There were many days where I felt worn out and defeated, because something was going wrong with what I wrote but I couldn’t figure it out. I never got to the point of giving up though. It was like an itch I had to scratch. Getting a program all debugged and running was GREAT! It was exhilarating. I think the reason it felt so great is that I realized how hard it was.

        Fortunately I had some help too. The library had a few computers in a room, so other people there would lend me a hand.  After a few months I finally got to the point where I could write a program on my own without much hand-holding from helpers. Then I found a programming magazine that had what were called “type-ins”. They published the source code for a program, and the accompanying article explained what the code did. A great learning tool. Not too long thereafter I started writing little word game programs. Then I started writing programs with features I thought would be genuinely useful to others. Things progressed. I got my computer science degree in the early 1990s and I started working in IT software. That’s where I am today.

        What made programming exciting to me was 1) knowing that the option was available, and 2) I got the opportunity to watch other people program, and watch their programs evolve as they fixed bugs and added features. Most of the initial programming I saw done was in graphics or gaming. The visual aspect is very exciting, and to actually watch it work under the control of a programmer is exciting. Heck, just seeing a program run, and allow me to interact with it, got me hooked on computers as a kid. What got me programming though was seeing the cause and effect, seeing the process that a programmer goes through to get something working. Another hook for me was I could relate to the end product.

        One of my first memories of watching a programmer do his thing was watching a man sitting at one of the computers. I sat there observing what he was doing. He would type things in, and then run his program, repeatedly. He had already written a fair amount of code, so a concept was there. I tried to guess what he was trying to accomplish. It looked like he was trying to create a horse racing game, but the problem he was running into was that as the horse figures moved across the screen, the old copies stayed on the screen. Secondly they went across the screen very quickly. It didn’t have a sense of suspense that one expects with a horse race. The programmer was trying to figure out how to erase the old copies so it looked like movement, rather than stamping a bunch of horse figures on the screen, and how to slow it down. If he had just been working on something high level, like statistical analysis, I don’t think I would’ve been as interested. I wouldn’t have understood it at that age.

        In terms of multidisciplinary qualities, I found in college that I was good at music theory. I had participated in orchestra in public school, where it was available, but I was able to understand music at a symbolic level, which is a significant part of music theory. My professor commented that he noticed over the years that people who were good in math were also good at music theory. I’d say I was good at math, but not great.

      • #3159292

        How to Think Like a Programmer: Part 1

        by staticonthewire ·

        In reply to How to Think Like a Programmer: Part 1

        So Mark, we can agree that at least in part, the “special sauce” that makes
        for a good programmer consists of the pleasure we take in the game.
        From your anecdotes, I would guess that early exposure would also help,
        although it doesn’t seem to be required (in my case, my father was an
        E.E., and I could build logic circuits before I could ride a bike). A
        minimal seat-of-the-pants ease with math is probably another enabler.

        But this may not be what J.Ja is after. I’ve been trying to understand
        what he’s after – why he’s examining the issue at all. I thought
        initially that he was trying to assemble a list of criteria that would
        make it easier for him to winnow resumes while hiring. When I read
        through parts 2 & 3 of his post it seemed more like he wanted to
        work up some sort of syllabus or study guide that we could all use to
        raise our level of competency. But in the end I can’t really determine
        what his context is, and that makes his post harder to address.

        I have a personal interest in the question though, which is why I
        posted at all (I very seldom post on TR). I’ve been in the business all
        my life. I remember when the cutting edge in hard drive technology
        consisted of a vertically aligned, ceramic-coated aluminum disk that held
        about 250k – you couldn’t say definitively, because we all loaded our own
        disk I/O programs separately, and some were built for comfort and some
        were built for speed. The disks were kept under big metal hoods because
        periodically the ceramic coating on those big 24″ diameter disks would
        disintegrate and send shards of crap flying all over the place if they
        were uncovered.

        So I’ve been in the biz for some time, and have watched lots of
        programmers come and go. In all the decades I’ve been at it, I have
        experienced almost spiritual highs in the process of
        programming. Few things compare favorably to the sensation of grasping
        an entire system, creating it, making it all work, and seeing it help
        people. But at the same time, I’ve found myself increasingly isolated,
        completely unable to communicate what it is that I find so terribly
        consuming.

        Most people seem to think that programming is something like
        accounting, and their eyes glaze over when you start talking about it.
        Even most people who program for a living don’t get it because they’re
        what J.Ja calls – what was his term? It was good… oh yes – “shake n
        bake” programmers. Most programmers I run into cut code 9-5 and have
        never had the experiences J.Ja indicates when he lists things like AI,
        device kernels and so on – at best, they have had a very limited
        exposure to exploratory programming and haven’t had the eureka moment –
        nothing has ignited their passion; they’re in the biz because the money
        looked good, or because it looked too hard to study law…

        For years I’ve tried to understand why programming – which seems to me
        to be shot through with all the colors of the rainbow – can seem so
        colorless to the world at large. Maybe J.Ja is barking up the same tree?

      • #3159054

        How to Think Like a Programmer: Part 1

        by mark miller ·

        In reply to How to Think Like a Programmer: Part 1

        To staticonthewire:

        Since you had trouble understanding what Justin was getting at, I think I can summarize:

        The first part is philosophical. Justin is advocating for efficiency and elegance. He thinks this can only really be achieved if programmers take skills learned from other disciplines, like philosophy and math, and apply them to their work. I think he’s also saying that great programmers have an intuitive sense about problems. They don’t just insist that they need to go down one line of reasoning to solve a problem. They explore other possible solutions, and try to pick the best one. Perhaps the reason he thinks math helps is that it emphasizes simplification, reducing more complex mathematical expressions to simpler ones. Good math courses encourage students to search for the best rule/formula that will solve the problem. He also says they should take a scientific approach to solving problems: try an idea, and observe how well it works, repeat.

        In Part 2:

        He says when you are forced, in an educational environment, to try and build basic things with little to no library support, you are confronted with how to write these functions efficiently, because you end up using the functions you are building, often. This is in contrast to the “shake ‘n bake” programmers he speaks of, since they’re just spoon-fed the APIs, which do a lot of work for them. They don’t bother to explore how the APIs they use work, and therefore don’t understand their pros and cons.

        He gets into code efficiency (lines of code) in his Perl example.

        In his open source project example, he emphasizes the need for a consistent, structured programming style.

        Next he emphasizes learning a bunch of languages, adding them to your toolbox. He suggests using different languages together, since some languages are very good at certain tasks within a project.

        Lastly he says keep learning, expanding your horizons, have a passion for learning.

        In Part 3:

        He says good programmers think long-term, and design their software accordingly. Further, they code so others can read it.

        Programmers should think about the memory usage that comes as a result of their actions. He uses the example of loading up a bunch of libraries to utilize one easily replicated function. What he means is that there are situations where, if you load a code library to access one function, that library will have dependencies on more libraries, which will also get loaded. If the function is simple enough to just rewrite, without all the library dependencies, then you should do that, as opposed to bringing in all that overhead.
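
        A made-up illustration of that trade-off: if all you need is, say, a whitespace trim, a few lines of your own code avoid dragging in a whole string-utility library (and whatever that library in turn depends on):

        #include <stdio.h>
        #include <ctype.h>
        #include <string.h>

        /* Hypothetical: a small local helper instead of linking a library
         * (plus its dependency chain) for one trivial function. */
        static char *trim(char *s)
        {
           char *end;
           while (isspace((unsigned char)*s)) s++;                    /* drop leading blanks */
           end = s + strlen(s);
           while (end > s && isspace((unsigned char)end[-1])) end--;  /* drop trailing blanks */
           *end = '\0';
           return s;
        }

        int main(void)
        {
           char buf[] = "   hello, world   ";
           printf("[%s]\n", trim(buf));   /* prints [hello, world] */
           return 0;
        }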

        Programmers should think about the processing time of their solutions. He uses XML as an example of something that’s tempting to use, but is processor-expensive. He advocates writing custom CSV code instead for transporting tabular data. The code will run faster.
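
        As a rough sketch of what “custom CSV code” can look like (my own toy version – it ignores quoting and escaped commas, which real CSV data may need), splitting a line on commas is a few lines of pointer-walking, with none of the parsing machinery an XML document drags in:

        #include <stdio.h>
        #include <string.h>

        /* Split one CSV line in place on commas; no quote/escape handling. */
        static int split_csv(char *line, char *fields[], int max_fields)
        {
           int n = 0;
           char *start = line;
           char *comma;

           while (n < max_fields) {
              comma = strchr(start, ',');
              fields[n++] = start;
              if (comma == NULL)
                 break;                /* last field */
              *comma = '\0';           /* terminate this field */
              start = comma + 1;       /* next field begins after the comma */
           }
           return n;                   /* number of fields found */
        }

        int main(void)
        {
           char line[] = "1001,Widget,4.95,12";
           char *fields[8];
           int i;
           int count = split_csv(line, fields, 8);

           for (i = 0; i < count; i++)
              printf("field %d: %s\n", i, fields[i]);
           return 0;
        }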

        Programmers should be conscious of how their language compiler/interpreter handles certain structures. Doing things that seem benign can have negative effects on performance.
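
        One classic illustration of that point (mine, not one of Justin’s): calling strlen() in a loop condition looks harmless, but in general the string gets re-scanned on every pass, turning a linear loop into a quadratic one. Hoisting the call out of the loop is the kind of benign-looking detail he means:

        #include <stdio.h>
        #include <string.h>

        static int count_spaces(const char *s)
        {
           int count = 0;
           size_t i;
           size_t len = strlen(s);   /* hoisted: the string is scanned once */

           /* The innocuous-looking version re-evaluates strlen(s) on every
            * iteration:  for (i = 0; i < strlen(s); i++) ...
            * which is O(n^2) over the length of the string. */
           for (i = 0; i < len; i++)
              if (s[i] == ' ')
                 count++;
           return count;
        }

        int main(void)
        {
           printf("%d\n", count_spaces("how to think like a programmer"));   /* prints 5 */
           return 0;
        }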

        In short, in this last part he’s saying that programmers need to be conscious of the underlying implications of what they’re doing. Logically, at a high level, everything they do makes sense, but when you dig deeper, they’re creating programs that use a lot more memory than they need to, and run slower than they should. Programmers need to think about this.

        Overall, he advocates for efficiency, efficiency, efficiency, and elegance. I think what Justin is also advocating is that great programmers are in a sense trained in computer science, because everything he advocates is taught in a good CS curriculum: good structured code, descriptive variable names (and comments – he didn’t mention this), emphasis on efficient use of algorithms, learning how to write efficient code, learning how programs execute and how compilers/interpreters translate/execute programs, learning a variety of programming languages and seeing how each has certain advantages and disadvantages, and learning how to use different programming languages together (at least they used to teach this).

        In all the decades I’ve been at it, I have experienced almost spiritual highs in the process of programming. Few things compare favorably to the sensation of grasping an entire system, creating it, making it all work, and seeing it help people. But at the same time, I’ve found myself increasingly isolated, completely unable to communicate what it is that I find so terribly consuming.

        Huh. I would’ve thought the people who got into it for the money would be long gone by now. IT hasn’t been the best place to be in terms of worker morale and pay.
        Interestingly the only programmers I’ve met who for the most part see the programming world as “colorless”, as you put it, are female. I haven’t met many female programmers, and the few I’ve talked to have other interests they’re more passionate about. I’ve asked them if they’ve written programs on their own time. Without fail they’ve said no. In fact they think I’m strange for asking the question, like, “Why would I want to do THAT?” One lady I asked thought of programming like accounting, just like you said. So I could understand why she wouldn’t see it as a recreational activity. This isn’t to say they were necessarily bad programmers or unskilled. They just didn’t have the passion for it. Please understand I’m not trying to cast aspersions on all female programmers. I wish there were more of them in our profession. This has just been my experience.

        For years I’ve tried to understand why programming – which seems to me to be shot through with all the colors of the rainbow – can seem so colorless to the world at large. Maybe J.Ja is barking up the same tree?

        I think you are both barking up the same tree. He’s describing the frustration he’s encountered from dealing with people who are programmers but care only enough to meet the minimum standards necessary to not get fired. They don’t have any ambition to become better programmers. To them it’s just a job. They don’t care about the craft.

        Just recently I read some articles by Joel Spolsky, Paul Graham, and others who complain about something similar to what you and Justin have talked about. They take a slightly different tack: they put the blame squarely on the Java programming language. They claim that Java enabled schools to spring up that teach people just enough to get something done, but otherwise they produce programmers who are unenlightened and messy. They blame Java because they claim it handles all the messy stuff that programmers used to have to worry about, like memory management, and that this leads to carelessness. Come to think of it, I don’t understand why they don’t direct their ire at Visual Basic as well. It shares some of these qualities with Java, not to mention .Net. On the upside for corporations, they said, Java enabled a lot more programmers to be produced. During the ’90s this was seen as a boon, since companies complained of a shortage (they still do). The result though, they said, was a lot of second-rate software, failed projects, and failed companies. They also emphasize the virtues taught in the older computer science curriculum; one of them claimed that Java has somewhat corrupted computer science at universities, because it de-emphasizes some of those old virtues.

        If you’re interested, the web page that directed me to these articles is at: “Why Java is not my favorite programming language” http://web.onetel.net.uk/~hibou/Why%20Java%20is%20Not%20My%20Favourite%20Programming%20Language.html

        Interestingly, several of these authors had good things to say about Lisp, especially Paul Graham, who is a Lisp advocate. Graham’s account of his use of Lisp in his start-up years ago, called ViaWeb (now Yahoo! Store), is compelling.

        I programmed in Lisp a bit in college, but it was so alien to me, plus I had a terrible teacher for the class, and the materials I was given to learn with were not good. It turned me off to Lisp. But these guys convinced me to give it a second chance. I’ve been taking up learning it again just recently. The selling points for me were that Lisp has been, and continues to be used in some real world projects, and not just for AI. Secondly, at least one of the folks said that Lisp is a good language to learn, because it teaches you to be a better programmer, even if you never end up using it in a real project, like what Justin said about exposure to mathematics. The disadvantage Lisp has is most programmers don’t know it, or avoid it, due to it being so different from the dominant languages. A few who have tried it complain that it lacks robust tool and library support.

      • #3159693

        How to Think Like a Programmer: Part 1

        by justin james ·

        In reply to How to Think Like a Programmer: Part 1

        Mark –

        You summarized my thoughts perfectly, and you are spot on with where I was going with all of this. I am currently working on another post regarding the emotional involvement with coding.

        Joel Spolsky and Paul Graham are both excellent people to be reading. In my most recent blog, I used one of Joel’s recent postings to soften my anti-AJAX stance a bit. He reminded me that users do not really care what is happening, as long as it works well for them.

        I never had any direct Lisp experience, but EdScheme is a version of Scheme, which is a Lisp derivative. Mr. Graham’s writings have made me very curious about Lisp, and I would like to know more about it. Sadly, the work I get paid to do during the day is just not something that I would ever get to use Lisp with (I spend most of my programming time in VB.Net or VBA, bleh), and my personal time is extremely compressed at the moment. I am taking the chance to experiment a bit with F# (Microsoft’s version of OCaml), since it is easy for me to disguise “learning something new” as “learning to do things better” if it is occurring within a Microsoft environment.

        To all of the people commenting on this blog, thanks for the great discussion! This is the smartest and best discussion I have seen on this site since the “Could AI Ever Happen?” discussion.

        J.Ja

      • #3161239

        How to Think Like a Programmer: Part 1

        by staticonthewire ·

        In reply to How to Think Like a Programmer: Part 1

        Mark, I appreciate the trouble you took to summarize J.Ja’s post. To clarify, I wasn’t having trouble understanding J.Ja’s content (although your summary was very comprehensive). I was having trouble understanding the why behind the content – why he thought his quest for “what goes into a top-flight programmer” might bear significant fruit.

        As I emphasized (I believe in my first and second posts) there are
        few, if any, comprehensively useful programmer characteristics that I
        can identify. Even the efficiency and elegance that J.Ja admires can be
        a liability in a large shop, where individual elegance and efficiency
        can explode an operation’s overall efficiency. I addressed the futility of efficiency in this context with the example of the cobbler, if you recall.

        So my puzzlement with J.Ja’s post was not with the content – Justin
        writes with concision and cogency and puts things in ways that makes
        them easy to understand. Your summary was an equally clear distillation
        of his presentation. But neither you nor he managed to address my
        puzzlement – trying to define something like this seems impossible, so
        why…?

        The futility of developing a comprehensive “list” of good programmer
        characteristics or skills or activities led me to pondering the
        underlying issue, which to me brings us to the whole business of
        passion, of loving what you do, and personally, I think that’s about as
        far as it can realistically be taken.

        For the record, my own tendencies are very much like J.Ja’s – I’ve
        worked with many languages, some by fiat but most by choice, and have a
        strong background in basic science and math and would heartily endorse
        most of his recommendations. But none of them will mean anything
        without a deep desire to explore this art of ours (maybe someday it
        will evolve into a science…).

        “Huh. I would’ve thought the people who got into it for the money
        would be long gone by now. IT hasn’t been the best place to be in terms
        of worker morale and pay.”

        Yeah, well, I didn’t say they were RIGHT. I just said that was their motivation…

        BTW, I know many woman programmers, and IMHO the range of excellence is
        about the same as in men – 10% clueless, 80% “shake-n’bakers”, 10%
        genuine coin / the real deal / exploratory programmers. I’ve pulled an
        infinite number of all-nighters (no really – I counted) with very
        capable coders, women and men, and I can’t say I’ve seen any
        difference in competence or dedication.

        I do notice that there are variations in emphasis and focus, but no
        more than I would expect from critters as different as men and women.
        I’ve noticed the same sort of variation between Russian programmers,
        say, and programmers from the US or Japan. Cultural artifacts affect
        how problems are approached, but the random walk we all perform seems
        to bring us all to the same sorts of solutions…

        “I think you are both barking up the same tree. He’s describing the
        frustration he’s encountered from dealing with people who are
        programmers but care only enough to meet the minimum standards
        necessary to not get fired. They don’t have any ambition to become
        better programmers. To them it’s just a job. They don’t care about the
        craft.”

        Oh, I share his frustration, believe me. There is nothing worse than
        starting a project full of fire and realizing by the end of the first
        week that the team with whom you’re saddled consists of three
        shake-n-bakers, a glorified word processor, two interns who just
        finished their first “computer” course in community college and one
        brilliant but completely warped programmer who frames all his function
        calls in the form of koans that no one else can understand and is
        deeply involved in completing a sculpture on his desk made of the
        carefully preserved bodies of cockroaches he’s trapped under his desk…

        Whatever Spolsky & Graham et al may say, Java is not the root of
        all evil, nor is VB, or .NET or anything else. I use all these
        technologies competently and if I do say so myself, with real flair.
        I love Java, and VB and javascript too – hell, I still have a soft spot
        for Algol… the problem isn’t with the tech, the problem is with the
        people, as always. If we ever get to the point where we have a
        foolproof language we’ll find that it has a terribly limited array of
        solutions it can present. If you don’t have enough rope to hang
        yourself, you don’t have enough to save your ass, either. Or even solve
        your programming problem…

        There are always people grumbling about how easy kids have it these
        days, they don’t have to pay attention like we did in the good old
        days. Crap. There were just as many shake-n-bakers back in the good old
        days, FUBARing everything in sight. The 10% / 80% / 10% spread seems to
        be holding, if you ask me.

        I always liked Lisp as a toy or academic language, or for the
        individual programmer or boutique project but I think it’s a language
        meant for the dedicated programmer. I shudder at the thought of a big
        corporate shop filled with Lisp coders…

        If you like Lisp you might
        want to get a copy of Scheme, too, as J.Ja suggests. I believe I have
        source for an old version of Lisp and Scheme, if you’re curious – or at
        least the BNF that you could LEX/YACC into something workable…

        As I mentioned in my last post, I was drawn to J.Ja’s blog entry because it resonates with
        my own parallel quest, and perhaps I’ve led the subject too far away
        from his original intent to be a useful contributor here. But Justin
        has provoked this sort of response from me before, in the “Could AI
        ever Happen” discussion he mentioned, where the two of us, along with
        several others, managed to carry a fairly simple question into some
        very deep waters.

        So maybe it’s a personality thing… 

        In any case, it’s been a pleasure reading everyone’s thoughts.

      • #3146356

        How to Think Like a Programmer: Part 1

        by mark miller ·

        In reply to How to Think Like a Programmer: Part 1

        To staticonthewire:

        I got a good laugh out of your description of some typical programming teams you’ve dealt with. Wow. A programmer who wrote his function calls in koans and was building a sculpture out of dead cockroaches from under his desk! What an image! LOL!!!! 😀 I assume you said some of that in jest. Somehow, as I read that I felt like I could relate a little bit, like I was the guy who was doing that at one point in my life. I have since been reformed :). When I got out of college I was a bit full of myself in terms of technique. I remember when I got into one of my first jobs, working on a script interpreter written in C, I took one look at the code and said to myself “Oh this will never do.” The program had its bad qualities (there was a lot of cutting and pasting of code that went into it). It worked quite well for what it did. It was difficult to modify, however. Overall I wouldn’t say the program was terrible. I eventually convinced my boss to let me refactor the code (I use the term loosely). Before doing this I had read a book called “Code Complete”, and I took a couple of its suggestions to heart: always consider the “else” case in an “if” condition, and always put the most common expected action first in an “if-else” statement (like “if (<commonly expected condition>) {<commonly expected action>} else {<rare, unexpected action/error condition>}”).

        This led me to write if-else’s like this sometimes:

        if (<commonly expected condition>) {} else {<error action>}

        It was my way of testing for an exception condition (the kind of situation that would be an exception in C++, for example): if the condition was true, everything was fine, so do nothing. I used this form often. Eventually my boss ragged on me for writing code this way, though.

        As I progressed through the code I fell into the “mission creep” trap. I found more and more things wrong with it. Instead of just improving it a little, like consolidating the cut-and-paste code into functions, I insisted on rewriting the parser into lex/yacc code. Originally it used gets() and a tokenizer function for parsing. Further, I was dying for a way to make the pointers more manageable. There were tons of them, very complicated to deal with. Every time I made a change to the program, it would crash the computer because of some errant null pointer (it was running under straight MS-DOS). Instead of taking me a couple weeks like I thought, I ended up spending something like a couple months on it. Once I was done I was so proud of myself. IMO the code was a thing of beauty to behold. In order to do this, however, I had to take out some language features. Best I could tell, yacc couldn’t deal with a couple of the language structures that used to be there, so I convinced my boss to let me take them out.

        There were a couple of problems though. The new program required so much memory that I ended up compiling it using a DOS-extender. This meant that normal debuggers (like Borland C++ for DOS) would not work. I ended up using some other tool that could understand the DOS-extended code, but it was more difficult to use. The other problem came when our salesperson wanted me to create a software package for a demo he was going to do; it included my program, a report generator that contained the script interpreter. It turned out he needed some of those language features I thought I could take out. So I ended up giving him a version that was several months old, from a time before I did all of my meticulous work. What I realized then and there was that all I had done was satisfy myself. My thing of beauty was of no use to some of the people who actually needed to use it. That was a huge lesson for me. A few months later we ended up dropping the entire thing. We had one main customer we were developing this highly customized system for, of which the report generator I had been working on was a part, and they ultimately came to the conclusion that it didn’t serve them. So the report generator was dropped entirely. We ended up going with a completely different solution based on de facto standards: Windows 3.11 and a relational database. The reason we needed the in-house-created report generator at all is that the company was trying to market a custom designed, data-driven application authoring system, which used a non-relational database. Anyway, the main customer, and subsequent customers, really liked the Windows 3.11 solution. So the company went with that. All the changes I had worked on for the script interpreter were for naught. No one ended up using it. It wasn’t entirely my fault. Part of it, as I would later find out, was the fault of management, who, despite the protestations of the senior engineer in charge, insisted on this highly custom system being developed in the first place. It showed me that I had missed the forest for the trees, in my part of it. I don’t think I could’ve done much better at the time, given my immaturity, and that I had difficulty understanding what the full product was until I had been working there for about a year. I assumed the project was very important and would get used.

        Anyway, I wanted to address something you said earlier, that you felt isolated and have had difficulty communicating your passion for programming to people who don’t seem to get it. I don’t have a lot of answers for you in solving that problem. It kind of sounds to me like when you’ve been teaching students, you’re dealing with adults. Perhaps you might want to start with kids. Get ’em while they’re young, in other words. When I was in high school, I had the opportunity to teach kids programming, in BASIC, in elementary school. What I found was there is a certain age when they are “ready” for the material. I didn’t do it for very long, but I found that the “magic” age when they can start grasping programming concepts easily is 10-12 years old. I also had the opportunity to work with younger kids, and they had difficulty with the concept. I can say from my own experience, I started learning programming when I was about 11 or 12.

        At the younger ages, they’re still interested in fun things, applying their creativity to what they’re doing. I would think you would get an opportunity to nurture that in the realm of programming, to get them to use their creativity, and have those AH-HAH!! moments, that sense of victory from having conquered a hard problem and to see the fruits of their labor play out onscreen just as they had imagined it. I’ve seen teenagers in programming courses who didn’t really get the excitement. They tended to drop out of the class though.

        I would think working with older folks (adults), that they have more practical concerns that dominate their thinking: getting to the goal (getting the project done), and making enough money to support their families and their standard of living. To them, for the most part, having those AH-HAH!! moments probably isn’t that valuable. Just a suggestion.

      • #3156629

        How to Think Like a Programmer: Part 1

        by wmarkhall ·

        In reply to How to Think Like a Programmer: Part 1

        Hello, and thank you for the opportunity to reply to your topic. I won’t waste your time with wordy circular opinion or grist for an echo chamber; however, I completed a degree in IST six years ago (in my late 30s), and in that experience I took home one salient point re programming: those proficient in English were, in the opinion of every professor of programming I had occasion to ask, the better programmers by a significant margin. Programmers lacking English skill, among other things, tended to exhibit more slovenliness in their code, leading to less than eloquent or interesting solutions. This sampling, of course, implies college level math of at least a passing grade.

        There are myriad opinions on this topic, and the mere observation by a handful of programming professors that English skill equated to programming skill is by no means scientific; it is surely anecdotal. Nonetheless, I found this to be true in most, but not all, cases myself when examining the final projects in each of the classes I attended. The students that could write well could code interesting solutions as well as create compelling programs. I’ve since read a few essays that espouse the same viewpoint as expressed by my professors, as I’m sure a quick Google will reveal for those interested.

        Thank you again for a thought provoking topic on which to opine.

        Peace,
        -WMH

    • #3151971

      How to Think Like a Programmer: Part 2

      by justin james ·

      In reply to Critical Thinking

      This is the second of a three part series regarding how to think like a programmer.

      In my last blog (How to Think Like a Programmer: Part 1), I discussed how education, particularly a strong background in mathematics, is crucial to being a great programmer. In this post, I will go over some of the things that I have found extremely helpful in my personal development.

      The number one most useful experience in my learning process was EdScheme. Never heard of it? Sounds vaguely familiar? Well, it might sound familiar because of the word “Scheme,” a language in the Lisp family. You probably have never heard of it because it is pretty much useless. That is the point of EdScheme. The EdScheme language has only a few built-in functions (fewer than twenty, if memory serves), and no libraries that I am aware of. Working with the EdScheme books, you slowly build a much more full-featured language out of the meager tools you are given. In the process, you are taught a lot about things like recursion, efficiency, and the idea that there is usually more than one way to skin a cat. One example that still sticks out in my mind was building a simple string reversal function. My memory of this is all very hazy, but I seem to remember that we did not even have strings at first; just arrays, chars, integers, and a floating point type. As we progressed through the course, we kept building on our previous projects, and problems and inefficiencies which had gone unnoticed in the early work were soon uncovered.
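
      To give a rough idea of the kind of exercise I mean, here is a minimal sketch, in C# rather than EdScheme and purely for illustration, of a string reversal defined recursively in terms of almost nothing but “first character” and “rest of the string”:

      using System;

      class ReverseSketch
      {
          // Recursive definition: the reverse of an empty string is empty;
          // otherwise it is the reverse of the rest, followed by the first character.
          static string Reverse(string s) =>
              s.Length == 0 ? "" : Reverse(s.Substring(1)) + s[0];

          static void Main()
          {
              Console.WriteLine(Reverse("EdScheme")); // prints emehcSdE
          }
      }

      The point is not the language; it is that everything you need can be built out of a couple of primitives, which is exactly how the course forced you to think.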

      What this experience taught me was how to break a problem down to its most simple form, an Occam’s razor of programmatic thought. Ever since then, I have been much better at looking at a problem and asking myself, “what is the simplest way of making this work?” All too often, programmers find solutions that are incredibly inelegant, simply because they use methods and tools that they are already familiar with. Programmers should not code for the sake of writing many lines of code; they do not get paid by the subroutine a la Charles Dickens. They get paid to create solutions that meet the needs of the users. Anything more than that is simply hubris. Developers are not their own customers. The EdScheme experience showed me that achieving workable, maintainable, and excellent solutions is paramount; the pride of kludging impossible solutions out of inadequate parts should never enter the equation.

      Another experience which has helped me expand my awareness of good programming is the Perl language. Perl is the most elegant language I have ever used. There is something very satisfying about writing one line of code that does what another language might take five, or ten lines to accomplish. Some time ago, I re-wrote the Perl SoundEx CPAN module as a VB.Net class. The differences in how the code was written were astounding. It was not just that Perl has regexes built in as string operators while VB.Net requires a regex object. It was the entire attitude of the two languages. Writing in VB.Net is like talking to a four year old; programming in Perl is like talking to an adult. Yes, Perl code can quickly devolve into an unreadable mess, especially through overuse of the implicit operators. And yes, Perl has quite a number of contradictions and inconsistencies. But the Perl language is so incredibly instructive and powerful. In 2000 or so, I wrote a piece of shopping cart software. In 1,609 lines of code (including whitespace, comments, etc.) I was able to pack in a flat file database system (this was before every web host came with MySQL), a full templating system that I consider to be better and more powerful than PHP’s (this was before every web host came with PHP), the actual shopping cart system, session handlers (you could not count on the CGI module being installed), error logging, the whole nine yards. It even had a module system for rapidly constructing plugins. Admittedly, it lacked an online administration system; I also believe that it was simple enough to use not to need one. Like EdScheme, I found myself writing my own, custom tailored versions of code to handle things which most programmers take for granted, because the design specification said the only requirements were FTP access and Perl 5.

      To get a deeper appreciation for well written code, I highly suggest that you find an open source project that is not a “big player” and try modifying it and customizing it for your own use. Recently, I was working with ZenCart. Out of the box, the software itself is fine. But my customer had extremely demanding needs, and “out of the box” was not nearly enough. I had to go extensively into the code, to the point of modifying its base functionality. I was horrified. The code looked like the worst examples from Computer Science 101. There were nested loops with iterators named “m,” “n,” and “o.” There was little consistency in variable naming. And so on and so on. The code itself was fine on a functional level. But it was so unreadable that sometimes even minor changes would take hours just to find the right place to make them. That reinforced in my mind the need to develop a programming style and stick to it. I have a different style for every language I use. But I stay consistent within a language, to the point where I can copy and paste code from one project to another with little (usually no) modification needed. Small things, like always naming my iterators the same thing, make a world of difference; someone who sees what a particular variable does in one function immediately knows what it will be doing in a completely separate function in another part of the code, even another program that I wrote. It is especially important for teams of programmers to develop a common style and stick to it.

      Finally, I recommend that you get your hands on a variety of different languages, and try writing the same small project in each one. You will quickly learn what works and what does not, for each language. Most programs, at the end of the day, are essentially CRUD mechanisms; get data from somewhere, present it somewhere, allow some changes to be made, and put the changes back into the data source. Some languages are better at this than others. The .Net languages, for example, are outstandingly pleasant to use for straight database-to-screen-and-back functionality. But try to do anything with that data programmatically, and the serious shortcomings of Visual Basic and C# become quite clear. They are weak languages, incapable of doing too much without the giant .Net Framework behind them. Java is about the same. Perl is an incredibly strong language for performing processing, particularly of text, but its systems for getting data to and from the user leave much to be desired. Its roots as a procedural language make it less than fun to develop user interfaces with. And so on. As you learn more languages, you also have the opportunity to see how things can be done. The more tools in your toolbox, the more likely you will be to use the right one for the task.

      I know that there is a lot more to be added to this list. Really what I want to make clear is this: always expand your borders, never cease learning, and do not get too attached to any one language or methodology. Learn what works in what situations, and you will be able to meet any challenge.

      J.Ja

      This is the second of a three part series regarding how to think like a programmer. Part one discussed the reason why mathematics skills are important to the code writing process. In the third part, I will cover the difference between short term expediency and long term solutions.

      • #3153370

        How to Think Like a Programmer: Part 2

        by rhomp20029 ·

        In reply to How to Think Like a Programmer: Part 2

        While I agree with a lot of what you are saying, I do feel that you have left out the most important part of the process.  Anything you program has at some point got to have a human interface.  Unless you understand what humans will do with the code you have programmed, and take that into account well enough to forestall their screwing up the process, then no matter how wonderful your solution is, it will be meaningless.

        I had this problem years ago.  My boss got a contract to implement a system that was designed by someone else who had theoretically interviewed the clients and got their signoff on the whole design.  My job was to program what he designed, a task that I did and finished ahead of schedule.  I showed it to the IT person at the client and she was fine with it, thought it did exactly what it was supposed to do.  Then we took it to the user to show them how it worked and get their signoff.  Turns out that the designer had totally misinterpreted her needs and what he designed was totally out of the ballpark for what was needed.  I ended up having to not only redesign everything he did but also reprogram everything I had done.  The whole problem was that the end user was not included in the process all along the way, and that is the problem I have with your description of how to be a good programmer.

        I think that the programmer needs to be able to write good code that does the job.  I also think that the programmer needs to be able to visualize the whole problem from the micro to the macro and also be enough of a psychologist to see what the users can and will do, and preclude their screwing up the solution.  He also needs to be able to write good, efficient code that allows a natural progression of the process for the user, or the user will misuse the solution and screw up the files.  That is the part that I see missing, and it is a part that I saw missing in a lot of the work being produced in my 40+ years in the field.  All too often the programmer was very proficient at mathematics and science and totally out of the loop on how people interact and work.  You need both parts of this equation to make the solution not only efficient but useful, usable and people-friendly.

      • #3153351

        How to Think Like a Programmer: Part 2

        by justin james ·

        In reply to How to Think Like a Programmer: Part 2

        Rhomp –

        Those are excellent points, which I have gone over extensively already. Check out my posts from April 2006 and March 2006. Unfortunately, TechRepublic does not seem to offer a permalink to the individual articles, so I cannot put a “quick picks” list here for you. This particular series focuses on the mental process of writing code in and of itself. The earlier writings were about the interactions between the developer and the user.

        J.Ja

      • #3152657

        How to Think Like a Programmer: Part 2

        by srarden ·

        In reply to How to Think Like a Programmer: Part 2

        I have to say that as I read your posts, I sometimes find myself infuriated as I think to myself ‘How dare he pick on the way I code!  I work hard! I’m smart! I try!!!!’

        By the time I am done reading a post, I have gotten more material to REALLY think about than I do from all but a couple of other sources.

        I will never be at the level of programming that your posts make it clear programmers should be, but I will be closer to it, if for no other reason than you make me examine why I take the easy way out and code without thinking first.

        So, you tick me off.  And thank you! 🙂

      • #3153999

        How to Think Like a Programmer: Part 2

        by jamie ·

        In reply to How to Think Like a Programmer: Part 2

        I can’t agree with you more. I have been in situations where management has allowed programmers to use something that is new and bleeding edge just because it would be cool, rather than use older technology which was more suited to the job. I program in VB, BASIC, FORTRAN, C/C++/C#/Java/X++, PL/SQL, and T-SQL, and it’s brilliant to be able to say I have a problem that a particular language will solve neatly, rather than coercing another language into doing something that it wasn’t meant to do. Let’s face it, most programmers worth their salt can emulate features found in other programming languages, but why reinvent the wheel?

      • #3153985

        How to Think Like a Programmer: Part 2

        by problemsolversolutionseeker ·

        In reply to How to Think Like a Programmer: Part 2

        Have you ever noticed that articles about good programmers take an unannounced left turn into conversations about preferred languages?

        A more appropriate article would be ‘What makes a good solution provider for any given set of tools?’.

      • #3154119

        How to Think Like a Programmer: Part 2

        by rclark2 ·

        In reply to How to Think Like a Programmer: Part 2

        My education was through the Air Force tech schools, though I have picked up college courses along the way when I needed a particular set of skills. I have been a programmer over 25 years, and agree with you totally. I didn’t jump on the bandwagon when VB6 came out, so I am falling behind on all this OOP stuff. I realize I have to learn it, but right now, most of the problems I’m coming up against could have been solved with a 286 and QBasic if that platform had been web enabled.

        Your point on mathematics is well taken. I have trained dozens of programmers over the years, and most of them have succeeded or failed primarily based on logical thinking skills. I can give them the toolkits to program, and most can function, but to really become a programmer, a systems analyst, or a development architect, you have to have that thought process.

        The Air Force uses a cookie cutter approach to making programmers. They get a lot of cookie bakers, and not many chefs. Their basic formula is to teach programming primitives in a pseudo code language, then throw half a dozen languages at the students, showing how each primitive is used in each of those languages. The languages didn’t matter; the idea was to show that each language had methods to implement each of the primitives, and that each primitive solved some type of processing problem.

        There have been a lot of books, code, and articles written on the philosophy of programming. Most of it would make good fire starting material. At its core, what we do is solve problems using tools. Without the problem solving ability, no tool is useful. With problem solving ability, even without tools, you can solve the problem. Absence of the right tools just means a little larger problem, a longer project, or a more complicated solution. I hear “Don’t reinvent the wheel” over and over again. The problem is, we don’t have any “wheels” out there to program with. When is the last time a wheel was completely redesigned from the ground up because it didn’t fit the latest philosophy? Wheels are wheels because they work. Languages are dynamic and should be. But we shouldn’t have to reinvent the whole language every three years to solve the same problems we have been working with for the last 20. When paradigms change, so do the tools used to interact with them. The PC was one, the web is another. What will the next be? Who knows. Maybe wands. Maybe implantable processors. Maybe something so strange that we wouldn’t recognize it today.

         

      • #3141900

        How to Think Like a Programmer: Part 2

        by munish k gupta ·

        In reply to How to Think Like a Programmer: Part 2

        I read through your post. I agree that to become a good programmer, one should have undergone the rigors you have mentioned. This gives you an understanding of how, why, and what is happening.

        But I believe the market has moved to a point where anybody who can drag and drop controls onto a form is calling himself/herself a programmer. Has the use of IDEs made life easier? Yes, to a point they have taken some of the pain out of the job. But when you meet these people and they are not able to comprehend or appreciate the code or the design, people tend to get upset.

        The analogy is car making. Robots dish out cars on the assembly line, all looking the same and doing the same job. But that does not mean the market for custom cars or niche cars is over. The car makers making custom or niche cars command a premium for their work. So, over time, the tools (software) will become intelligent enough to generate the code themselves. All kinds of applications that offer no more intelligence than what a programmer would write will be rolled out by the tools. The market for the drag and drop programmers will vanish. But the market for custom, hand written software will still exist. The real programmers will command a premium and they will be able to carve out a niche for themselves.

        How long will the intelligent tools take to appear? My guess: once the cost of programmers goes beyond a certain level, the cost of creating such tools will become justified. Just like oil vs. biofuels.

    • #3153910

      How to Think Like a Programmer: Part 3

      by justin james ·

      In reply to Critical Thinking

      This is the third of a three part series regarding how to think like a programmer.

      One of the biggest mistakes that I continually see in the world of technology is the problem of short term expediency versus long term solutions. All too often, a programmer takes shortcuts based on what is easiest, what tools are in his toolbox, or what technologies are currently popular. These kinds of compromises lead to kludged-together systems becoming legacy code that is nearly impossible to maintain, inefficient, and slow.

      Recently on TechRepublic, there was a great discussion on whether or not a developer should try to anticipate their users’ needs. Part of that discussion came around to the idea of just how generic the code should be, which really balances the trade off between initial development time and long term flexibility. For example, in an OOP language, the programmer can implement every piece of code as a class, ignoring the chance that in the future there might be a need for that class to have been an interface. On the other hand, to write everything as an interface and only have one class implement it adds an extraordinary amount of time to the development process. It is indeed a fine line.
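
      To make the tradeoff concrete, here is a hypothetical C# sketch (the names are mine, not from any real project): the quick route is one concrete class that callers use directly; the generic route declares an interface up front even though only one class implements it today.

      // Quick route: callers depend directly on the concrete class.
      public class ReportWriter
      {
          public void Write(string path) { /* write the report */ }
      }

      // Generic route: callers depend on the abstraction instead.
      public interface IReportWriter
      {
          void Write(string path);
      }

      public class CsvReportWriter : IReportWriter
      {
          public void Write(string path) { /* write the report as CSV */ }
      }

      If a second implementation ever shows up, only the second design absorbs it without touching the callers; until then, the interface is pure overhead, which is exactly the fine line I mean.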

      One example of where this tradeoff never should be made is in the readability of the code. I do not care what kind of deadline you are under, there is simply no excuse for naming variables “a,” “b,” and “c”; the extra bytes will not break your budget. In fact, for every half second saved while writing the code, you are probably adding three minutes to the debugging time.

      Another shortcut that causes big problems down the road is the inefficient usage of libraries. It is all too tempting to load up a million libraries for just one, easily replicated function. The end result is an application that does not scale well. Take that little Web application you wrote, the one with a million dependencies, and look at its memory usage. “But it is only using a few hundred kilobytes!” you say. Multiply that by what happens if your application suddenly gets very popular, or used in a large corporate environment. Multiply “only using a few hundred kilobytes” by 70,000 simultaneous users, and all of a sudden your systems administrator is banging on your door and using words like “refactor the code.” If you are loading a whole library for just one small call or two, ask yourself if that is something you really need, or could possibly rewrite within your existing code. That is one nice thing about open source: you can take what you need and leave the rest behind.

      XML (and similar technologies) is another pitfall. I know that I keep harping and nagging about XML, but it bears repeating since it seems like everyone likes it so much. XML is easy to use, and there are plenty of libraries and built-in functions in most major languages now to handle it. But it is ridiculously inefficient, both as a transport mechanism and in terms of system resource usage. XML manages to get the worst of all worlds; it involves a zillion more bytes’ worth of delimiters than a standard flat file, and uses a tree structure that is CPU intensive (to say the least!) to parse. If you find yourself reaching for XML for tabular data, think again. Take the twenty minutes to write and test a CSV creator on the data end and a CSV parser on the client end. Not only will you save yourself the overhead of those XML functions, but the size of the data will be significantly smaller, and the parsing will be dramatically faster. Better yet, if you have the same language on each end, serialize your data structures and pass them around. It may be bigger than a flat file, but it will be even lighter on the CPU. Microsoft learned this lesson in the Live applications; they switched from XML to JSON and the speed went up dramatically.
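
      For what it is worth, here is the sort of bare-bones, twenty-minute CSV creator and parser I have in mind, sketched in C#. It is an illustration only, and it assumes the fields contain no embedded commas or newlines, which is part of why it is so cheap:

      using System.Linq;

      static class CsvSketch
      {
          // Turn rows of fields into CSV text. No quoting or escaping is done;
          // this sketch assumes the data contains no commas or newlines.
          public static string Write(string[][] rows) =>
              string.Join("\n", rows.Select(r => string.Join(",", r)));

          // Parse CSV text back into rows of fields, under the same assumption.
          public static string[][] Parse(string csv) =>
              csv.Split('\n').Select(line => line.Split(',')).ToArray();
      }

      Compare the output of something like that to the same rows wrapped in an opening and closing tag per field, and the delimiter overhead argument makes itself.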

      It has been my experience that debugging and testing often take up at least 25% of the programming process, and frequently extend to 50%. Code is relatively trivial to write if you start with a solid knowledge of the language, the business goals, and some quality pseudo code (which you should be writing anyway to verify that the program meets the customers’ needs). Spending a few extra moments to reduce unnecessary lines of code, ensure consistent variable naming, give variables proper names, skip implicit operators unless they are obviously being used, and so on goes a very long way towards making the debugging and code review process go as smoothly as possible. What is the use of saving twenty minutes coding if you increase your debugging and maintenance by 10%?

      Also learn about how your interpreter or compiler handles conditional statements and loops. It is all too easy, out of laziness or ignorance, to write one of these statements in a way that causes your application to run much slower than needed. For example, if the language you are using evaluates conditionals from right to left, put the conditions that are most likely to make or break the condition on the right. Why evaluate conditions that are not likely to make a difference? The same holds true for loops. Sure, it is easier to write something like: for (intRowCounter = 0; intRowCounter < tableDataSet.Rows.Count - 1; intRowCounter++) { but your users will be much, much happier if instead you assign tableDataSet.Rows.Count - 1 to a variable and check that variable. This saves the software from having to keep going down the object tree to find the count of the row object, and then subtracting from it. Little things like this add up very quickly, especially in a commonly run block of code.
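
      In concrete terms, here is a C# flavored sketch of the same loop both ways; tableDataSet here is assumed to be an ADO.NET DataTable, and the loop body is elided:

      using System.Data;

      class LoopSketch
      {
          static void Process(DataTable tableDataSet)
          {
              // As written above: the loop test reads tableDataSet.Rows.Count
              // every time it is evaluated.
              for (int intRowCounter = 0; intRowCounter < tableDataSet.Rows.Count - 1; intRowCounter++)
              {
                  // ... work on tableDataSet.Rows[intRowCounter] ...
              }

              // Hoisted version: walk the object tree once, then compare
              // against a plain local variable on every pass.
              int intRowLimit = tableDataSet.Rows.Count - 1;
              for (int intRowCounter = 0; intRowCounter < intRowLimit; intRowCounter++)
              {
                  // ... work on tableDataSet.Rows[intRowCounter] ...
              }
          }
      }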

      Programming with this type of thinking requires a wholesale shift away from the short term, towards long term thinking. I have listed only a few tips here, but I think you get the idea. Ignoring the long term consequences of your coding for the sake of short term expediency is the path to buggy and slow code that you will regret ever having written, and that your users will wish you had never written.

      J.Ja

      This is the third of a three part series regarding how to think like a programmer. Part one discussed the reason why mathematics skills are important to the code writing process. Part two discussed some techniques for expanding your thinking skills to be a better programmer.

      • #3153882

        How to Think Like a Programmer: Part 3

        by jaqui ·

        In reply to How to Think Like a Programmer: Part 3

        XML= eXtensible Markup Language

        It is designed for use in laying out the look of a document to meet the needs of your application.

        Using XSLT to alter the data in the document is leveraging XML to do something it was not designed for. I will use XML for layout purposes, as it was intended, but for manipulation of the data it is better to use a different tool: use Perl to manipulate the textual data, then XML to present it. That uses both tools for their designed purpose.

        AJAX is another tool that is being pushed right now that really needs to be rethought by those pushing it.
        When broadband / low cost internet connections are really only widely available to 25% of the world, anything that sucks up data transfer capacity just to maintain state needs to be shelved until the rest of the world has cheap data transfer and broadband availability.

      • #3153175

        How to Think Like a Programmer: Part 3

        by ozpenguin ·

        In reply to How to Think Like a Programmer: Part 3

        > All too often, a programmer takes shortcuts based on …
        what their manager/client constrains them to.

        Don’t blame programmers for the fact that there is neither the time nor the budget to do the job properly. It is not the programmers who are looking for shortcuts, it is the guy who has to put his hand in his pocket to pay for the work. This is always a short term solution, but people must work within the constraints they are given.

      • #3153156

        How to Think Like a Programmer: Part 3

        by pdq_rose ·

        In reply to How to Think Like a Programmer: Part 3

        I found myself smiling and agreeing with most everything you said in your article.  I think your series has a lot of good points, and it’s certainly good to have a large repertoire of languages and techniques on your plate to choose from.  I’ve found the key to being successful is being well balanced and knowing what to use and when.  For example, I’ve met a lot of people who feel programming a database (ex: PL/SQL, Transact-SQL) is the only thing that really matters… everything else is a waste of time.  So, I’ve been on projects where the database was just given way too much importance.  In some enterprise systems, your database can be extremely important, but it is not always the only thing that matters.  Then you have those who want to rewrite the OS kernel in order to solve a simple business problem.  There has to be a good balance between what is needed and the best way to solve the problem.

         

      • #3153146

        How to Think Like a Programmer: Part 3

        by stevenschulman ·

        In reply to How to Think Like a Programmer: Part 3

        I do not understand the complaint that code may not scale well and that if all of a sudden the code will be used “by 70,000 simultaneous users” then refactoring will be necessary. If there is low demand then there is no problem with the quick-and-dirty version. If the program uncovered a previously unidentified demand then the program probably should AT LEAST be refactored. It probably should be rethought completely.

      • #3153073

        How to Think Like a Programmer: Part 3

        by mindilator9 ·

        In reply to How to Think Like a Programmer: Part 3

        another aspect i think is missed is the awareness of other solutions to use. sure we’re either all rocket scientists who can code or we’re just subclass instances of the CollegeEducation object, but some people (myself to a degree) end up teaching themselves a language, or new things about their favorite language. and we all know we have to stay vigilant and keep up with the current technologies. does this mean we go back to college and get the complete digest on the new fads….every 4 weeks? most of us even with college educations or IQs that can’t be determined by an algorithm end up teaching ourselves the latest thing. this means almost all of us lack the proper guidance in a particular language. some programming shortcuts are only shortcuts in typing speed, while they end up using unnecessary amounts of memory or bandwidth. we as eager students find these typing shortcuts and revel in their efficiency, never knowing the underlying effects on the bits themselves. no one will deny that ajax is poorly documented, and that’s why all too often it is poorly implemented. my boss isn’t going to pay me for the time to learn to use a new technology correctly. half the time it is replaced or obscured by some new rising star just as soon as you figure it out. i wholeheartedly disagree with my boss’ inability to perceive the value; but he isn’t a programmer, he went to business school. to him it doesn’t make sense. to me it’s the only thing that makes sense. i absolutely believe that everything in this universe relates to everything else whether you can perceive it or not. being a good programmer, education aside, relies just as heavily upon having a good boss who understands the value of spending time now to achieve long term success. unfortunately the business world is scared poopless that it will collapse tomorrow so every focus is on the current quarter profits. doing things the right way makes no money today. sad but true, i wish it were different.
        but i do take your article to heart. i was under the gun to write a program that takes a 500 field application and maps the data to some specialized commercial software, to shift the burden of data entry to the user. i was just out of school and so i wrote roughly 12,000 lines of code and, in between my other duties as tech support, the project took me about 9 months. after that i was able to take more time to learn the subtle do’s and don’ts and i recently went back to the app and pared it down to about 4000 lines of code. it also took me much less time to do so than to write the app the first time. this means if my boss had taken my initial advice and time estimate to plan more before development, i could conceivably have done it in 3 months. yet there was no way i could quantify that to him; he was uninitiated in the world of programming. he sees the lesson now, but it was a hard one to learn. bosses want to know the exact x amount of dollars, or close enough to it, that your precious time spent planning and debugging is going to save them. when those dollars are saved by lack of complaint calls to the support line, or by lack of being in a problem situation altogether, it is nearly impossible to impress that value on the man with the red pen. they are hypotheticals, and only programmers deal with that, it seems to me anyway.

      • #3153023

        How to Think Like a Programmer: Part 3

        by wayne m. ·

        In reply to How to Think Like a Programmer: Part 3

        Refactoring

        I suggest we take care to use the term “refactoring” as it was used by Martin Fowler in “Refactoring: Improving the Design of Existing Code.”  Refactoring is changing the readability and maintainability of code; it does not include adding functionality or improving performance.

        It has been widely accepted that software performance enhancements should be treated as a separate practice and be done after implementing functionality has been completed (one should avoid premature optimization).  Refactoring takes an identical approach to code clean-up.  Code clean-up needs to be recognized as a necessary step in producing code and not something that is done unofficially and informally.  Focus first on implementing the correct functionality; use a refactoring pass to correct variable and function names and decomposition.  Next, if necessary, have a pass at performance enhancement (library usage would certainly fall into this category), but make sure this is done based on measured characteristics.  Follow the performance enhancement pass with yet another refactoring pass.

        It is a matter of focus.  Approach an effort as a four stage task – Functionality, Clarity, Performance, Clarity Again.  Do not try to do multiple things at once.  Expect to do a refactoring pass to improve the readability and design of the code, and then make sure to do it.

         

         

      • #3154290

        How to Think Like a Programmer: Part 3

        by xitx ·

        In reply to How to Think Like a Programmer: Part 3

        Achieving long term goals doesn’t conflict with short term ones. To an experienced programmer, no matter whether it’s a long term solution or a short term one, the design or the code structure is always the same: easy to extend and maintain. For a programmer with less experience, spending tons of time doesn’t always result in a good design.

        Design is king. Coding is trivial. Any code should be in modules that are easy to take out or put in. Like building a house, the structure is the frame and the particular functionalities are the bricks. Bricks can be taken out or added easily without affecting other parts. All of it depends on a good structure.

         

        Dennis

      • #3154209

        How to Think Like a Programmer: Part 3

        by deveshpradhan ·

        In reply to How to Think Like a Programmer: Part 3

        Excellent article… I agree with you on all points. I think most programmers do not relate to the user’s needs; if they can think of the other side (the user’s view) then they can probably think it through and implement it better. Hence coding is not only about mastering languages but more about applying them to the user’s needs.

        Thanks a lot again for such a great article… Most people know these points, but they tend to forget or ignore them once they are in the coding phase.


      • #3160723

        How to Think Like a Programmer: Part 3

        by mark miller ·

        In reply to How to Think Like a Programmer: Part 3

        Justin James said:

        Sure, it is easier to write something like: for (intRowCounter = 0; intRowCounter < tableDataSet.Rows.Count - 1; intRowCounter++) { but your users will be much, much happier if instead you assign tableDataSet.Rows.Count - 1 to a variable and check that variable. This saves the software from having to keep going down the object tree to find the count of the row object, and then subtracting from it.

        It depends on what language you are using. If you are using C#, it’s my understanding that the compiler optimizes the loop test so that the expression “tableDataSet.Rows.Count - 1” is only evaluated once during the loop’s execution. Not all programmers like this, since they sometimes want the for construct to re-evaluate the test condition: the value on the right side of the comparison operator (with a different expression, for the sake of argument) may change during the course of the loop. This behavior is not without precedent. Many years ago I worked with a dialect of BASIC that implemented this same sort of loop optimization.

      • #3146503

        How to Think Like a Programmer: Part 3

        by grantwparks ·

        In reply to How to Think Like a Programmer: Part 3

        I don’t understand your detour into XML bashing.  It’s antithetical to the rest of the article; XML’s malleable and self-describing nature is what makes it a superior representation format for the structures required to enable data-driven applications.  And data-driven, rule-driven meta programs don’t require constant attention and rewriting as needs change.  Of course other, simpler formats are more appropriate for large datasets of predictable “rows” of data, but your CSV alternative is comparing apples to oranges when you complain about XML’s “resource usage”.  I would counter that if one were to load his entire CSV file into memory and then try to randomly navigate around it, using values in one row to access other rows (as I can do with ease in DOM using XPath), the result would be a much more inefficient application.  Granted, you do make the distinction about not using it for tabular data deep down in the paragraph.  My gripe is that you should’ve made the point “use data representations that are appropriate for the volume and type of access”.  I can think of many examples where people have wrongly employed database tables to contain metadata and rules to control an application, requiring lots of brittle code (and painstaking restructuring) whenever the underlying schema changed.

        Of course it doesn’t help that MS seems to not have a clue what XML is all about, or at least they are not passing it on to their large base of devotees.  It is not simply a data representation.  Have a peek at SVG and RDF.  To duplicate what these technologies provide using traditional data mechanisms would be barely possible for most programmers.  See how easy it is to query a database for *tabular data*, returning the results in XML and transforming that with a simple XSLT into an interactive chart.  Sans coding.  The unfortunate reality is that few in our field today have enough breadth of experience to be able to call on enough tools.  When one’s palette only has a couple of colors, the results can be pretty bland.

        (Jaqui – pretty God-like of you to know what things were “intended for”; computers were intended to solve artillery trajectory calculations and calculate compound interest.  What are we doing on the Web? 😉)

    • #3159979

      AJAX: The Right Goal, but Often the Wrong Tool

      by justin james ·

      In reply to Critical Thinking

      I think I am beginning to come to terms with AJAX. Frequent readers know how much I dislike it. Joel Spolsky ("Joel On Software") wrote a good piece recently (FogBugz 4 1/2 and Subjective Well-Being) that has helped turn me around, but only slightly. Joel helped me see where my anti-AJAX crusade is unproductive. AJAX does have the right goal in mind, but I still think that it is a poor technique for accomplishing it.

      What Joel (rightly) points out is something that I harp on a lot, which is that the user experience is what it is all about. AJAX holds out the promise of much quicker response to user input, which is a prime factor in user satisfaction. AJAX also allows user interactivity that traditional Web applications do not provide, such as drag and drop and right click functionality.

      What Joel misses though is the technical issues with AJAX. As Roger Ramjet (frequent ZDNet commenter) says, "JavaScript is the COBOL of Web development." I would like to add to that, "AJAX is the Yugo of programming techniques." Sure, it can get you where you want to go, but you really will not enjoy the ride. In this case, the destination is a great user experience, and the ride is the development process.

      When you mix "the COBOL of Web development" (JavaScript) with "the US national budget of data transfer mechanisms" (XML), you get "the Yugo of programming techniques." AJAX combines a slow language that has many mildly different implementations with a bloated way of transferring data, and tries to display it on the screen with a system that still will not reliably display things the way you want (HTML). This just makes no sense to me.
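
      To make the "bloated transport" point concrete, here is a throwaway sketch (in C#, only because that is what I had handy; the record and field names are made up) that prints the character counts of the same little record expressed as XML and as JSON:

      using System;

      class PayloadSketch
      {
          static void Main()
          {
              // The same hypothetical record, serialized two ways.
              string xml =
                  "<customer><id>42</id><name>Ann</name><city>Oslo</city></customer>";
              string json =
                  "{\"id\":42,\"name\":\"Ann\",\"city\":\"Oslo\"}";

              Console.WriteLine("XML:  " + xml.Length + " characters");
              Console.WriteLine("JSON: " + json.Length + " characters");
          }
      }

      Nothing scientific about it, but the opening and closing tag per field is where the extra bytes come from, and it only gets worse as the number of rows grows.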

      Many AJAX proponents claim that AJAX is the only way to achieve these goals. I believe that this is incorrect. Remember, the goal is to provide a great user experience. Does drag and drop or right click add much value to the user? Yes, but only for some users. The majority of users do not use drag and drop or right click unless they have been taught to do so in a particular application. Users are timid. They do not try things outside of the ordinary. So adding this functionality is an incredible amount of work for no gain for most users.

      So I am softening my stance a bit. If AJAX can be used in a way that the problems with JavaScript and XML do not hinder it (small chunks of data, basic JavaScript functionality) then it is a perfectly fine technique to use. But to try to replicate full rich client functionality just does not work. Microsoft has discovered this with the Live.com development. They reached very far, and had to drop the XML end of things in favor of JSON; XML parsing was taking far too long on the client side. AJAX can deliver usability benefits, but only when used appropriately and carefully, and the cost of development must be weighed against the expected utility.

      J.Ja

      • #3159710

        AJAX: The Right Goal, but Often the Wrong Tool

        by wayne m. ·

        In reply to AJAX: The Right Goal, but Often the Wrong Tool

        It must be my old age showing, but after reading the FogBugz link, I was thinking, “AJAX rediscovers the Function Keys!”  Remember the pre-graphics days when almost every application had a line across the bottom describing what the current function of F1 – F10 (F11, F12) was?

        I think AJAX is a useful capability to allow customization at a user’s desktop, but I still feel that the current HTML browser desktop is insufficient to support interactive applications.  AJAX is currently a patch over that shortcoming, and I would rather see effort expended in defining a full featured, session-based desktop.  AJAX could then be used to adapt the desktop instead of fully implementing it.

        And yes, bring back the function key row!

         

      • #3159604

        AJAX: The Right Goal, but Often the Wrong Tool

        by mulkers ·

        In reply to AJAX: The Right Goal, but Often the Wrong Tool

        Ajax is scratching the surface of web 2.0. But I think Ajax is a first sign of users wanting better ergonomics and richer GUIs back. I think the A of Asynchronous is here to stay. But the JavaScript in the browser does not make any sense.
        If the application gets downloaded over the wire, it needs to be written in portable code running in a virtual machine or something similar.
        My question is: what do you consider acceptable alternatives to Ajax?
        Java?
        Flash?

        Robin

      • #3159599

        AJAX: The Right Goal, but Often the Wrong Tool

        by info ·

        In reply to AJAX: The Right Goal, but Often the Wrong Tool

        Why not discuss Flash as an alternative to AJAX-based applications? (Hey, I like AJAX, only then I mean the football team from my hometown Amsterdam…) I build RIAs (Rich Internet Applications) based on Flash and it works like a charm. Above all, one is NOT bound by the restrictions of HTML, which in my opinion is unsuitable for applications. I feel that AJAX is a nice idea, but founded on the wrong technology. Flash is much more suitable for web-based apps, and with 98% of users being able to view Flash content, it’s a pretty cool platform for building web applications.

      • #3159566

        AJAX: The Right Goal, but Often the Wrong Tool

        by jarit1 ·

        In reply to AJAX: The Right Goal, but Often the Wrong Tool

        I have been developing Ajax-like applications for over 6 years, and although it always seems like a great idea when I start the project, you have to take care that it doesn’t become a ‘golden hammer’. I have made these mistakes myself in the past, where, in addition to using it in places where it really added value, Ajax was used everywhere, even on customer registration pages.

        When used in the right place, Ajax can offer a great experience. However, it is my experience that the concept of parts of the code and data living on the client and parts on the server goes way over the head of many ‘ordinary’ web developers.

        So my advice is to use it sparingly and only where there is a direct benefit for the customer. If you use it, use industry standard libraries such as Microsoft’s Atlas so that new team members do not need to learn your proprietary libraries from scratch.

        I think what Microsoft is doing on http://www.live.com is not a good example of using Ajax. Sure it looks cool and is well developed from a technological point of view, but it also slows the loading of the pages and requires users to get used to yet another proprietary interface.

        Some of the libraries I have created over the years are discussed on my blog at http://www.dotnetjunkies.com/weblog/jritmeijer

      • #3159559

        AJAX: The Right Goal, but Often the Wrong Tool

        by just_chilin ·

        In reply to AJAX: The Right Goal, but Often the Wrong Tool

        Ajax seems to be a solution to a problem that was solved a long time ago by Java applets. If only Sun had continued improving applets. I still think applets will soon be back in style, and who knows? maybe even Swing!

      • #3159552

        AJAX: The Right Goal, but Often the Wrong Tool

        by matthew22 ·

        In reply to AJAX: The Right Goal, but Often the Wrong Tool

        I knew someone would suggest Flash as an alternative, but there’s a BIG problem with Flash.

        You talk about proprietary and Machiavellian! Microsoft is open-source by comparison. Adobe won’t even give out a command-line compiler to make .SWFs. Say I was a poor college student or independent developer, and I wanted to develop ActionScript Flash applications using a freeware text editor. I couldn’t! Yet I could write in any other programming language that way: PHP, any of Microsoft’s .Net languages, Perl, C/C++, Java, JavaScript, Pascal, you name it. Of course a fancy IDE is nice, but in many of these cases there is NO benefit to be gained by paying to become a developer!

        Even before Microsoft released the “Express editions” of their .Net development software, they offered a command-line compiler so people could at least develop for .Net if they were desperate and broke. But Adobe isn’t that kind to developers!

        So although 98% of end-users have Flash capability, I have serious doubts that Flash will ever be popular with developers. Nothing that costs $800 to join (I’m not joking or exaggerating!) can ever be a “must-have” for a developer’s resume.

        Just like VCRs were optional when they cost $1000, and CD-R drives were optional when they cost $400, I’m sure Flash will always be a niche player as long as they refuse to court developers in a serious manner. And a 30-day trial is not good enough.

         

        Matthew

         

      • #3159518

        AJAX: The Right Goal, but Often the Wrong Tool

        by gregfuller ·

        In reply to AJAX: The Right Goal, but Often the Wrong Tool

        Matthew, there are open source compilers for Flash. Also, Flex 2, which includes a very rich component set, will offer a command line compiler.  This will let you develop rich internet applications with no cash outlay.

      • #3159506

        AJAX: The Right Goal, but Often the Wrong Tool

        by blj ·

        In reply to AJAX: The Right Goal, but Often the Wrong Tool

        I disagree with what you are saying about AJAX. Your statement that AJAX is an expensive development for the users, where not many would know how to use drag and drop, is rather absurd. It depends on the type of application, the type of users, and how you implement interactivity.

        I agree that JavaScript is cumbersome, but only if you don’t know how to use prototype.js and/or similar tools. XML should not be called bloated either; blame the developer or architect, whoever is responsible for it.

        In my application, I have AJAX with prototype.js and XML, and it was added to an already working application. There is no change to the way it is used, and of course it is quicker and much more usable. My work falls back gracefully to the old methods if JavaScript is not available on the client.

        AJAX is a hack that makes life a little easier. There is nothing wrong in building a rich client with AJAX. You can use it or abuse it; for example, Gmail and Google Calendar are good uses, and Yahoo Mail abuses it.

      • #3159417

        AJAX: The Right Goal, but Often the Wrong Tool

        by andy ·

        In reply to AJAX: The Right Goal, but Often the Wrong Tool

        The assertion that “the majority of users do not use drag and drop or right click” is absurd on the face of it. Virtually every user of a web browser on any computer is familiar with drag and drop and right clicking; it’s built into the graphical user interfaces of operating systems from Linux to OSX to Windows. In fact, that is the whole point of adding such functionality to web applications: users are so familiar with drag and drop and fast responses to their actions from using desktop applications that they want to see it in their web applications as well.

        Another absurd statement is that JavaScript is the COBOL of web development. If JavaScript is so primitive, why has Adobe built it into Photoshop and Illustrator and Acrobat as the default scripting engine? Why did Microsoft build it into the very heart of the way that ASP.NET works in the browser (“javascript:doPostback()” is just one example)? Why has Google had such great success using it in Google Maps and Calendar and GMail etc.? In fact, if JavaScript is so lousy, why has AJAX become so popular that Justin James has had to “soften his stance” on it even a little bit? If AJAX wasn’t working, there wouldn’t be a discussion. AJAX works because of JavaScript. Time for J.Ja to check the reality meter.

      • #3161106

        AJAX: The Right Goal, but Often the Wrong Tool

        by compuguru ·

        In reply to AJAX: The Right Goal, but Often the Wrong Tool

        Whenever you disagree with a solution, you should give an alternative solution.  What is your alternative to AJAX?

      • #3151581

        AJAX: The Right Goal, but Often the Wrong Tool

        by mattmutz ·

        In reply to AJAX: The Right Goal, but Often the Wrong Tool

        I agree that AJAX is valuable in enhancing the user experience, if it is used correctly; don’t just use it for the sake of using it. Some Internet users are so accustomed to web apps behaving a certain way (round trips to refresh the whole page) that they might get confused if a page isn’t behaving as expected because some developer wanted to showcase his mad AJAX skills.

        And yes, it is bigger and more complicated than it should have to be. There’s an alternative: combine Flash and XML to achieve AJAX-like functionality with far fewer lines of code, and a quicker, peppier response.

        It’s been done… check out free SDK and examples at http://www.fjax.net

        -Mutz

      • #3147934

        AJAX: The Right Goal, but Often the Wrong Tool

        by mattmutz ·

        In reply to AJAX: The Right Goal, but Often the Wrong Tool

        AJAX, when used intelligently, does help solve some of the usability issues we face when building for the web. It’s not perfect, but it’s worthwhile.

        Building cross-browser web apps with traditional AJAX is more complicated than it should be, but there is a fast and easy alternative: FJAX (www.fjax.net).

      • #3161175

        AJAX: The Right Goal, but Often the Wrong Tool

        by greg ·

        In reply to AJAX: The Right Goal, but Often the Wrong Tool

        FLASH IS THE WAY TO GO

        JavaScript / DOM / HTML is so unreliable that it is like comparing a ’70s Chevy (AJAX) with a 2000s hybrid (the Flash platform).

        I can’t understand why people ooh and aah over simple stuff like drag and drop when it takes a monstrosity of code and time to develop with JavaScript (and will still end up never working on half the known man-made browsers), and can be done in about 30 seconds in ActionScript.

        Flash doesn’t have this browser compatibility problem. Open source or not… JavaScript blows ass.

      • #3146989

        AJAX: The Right Goal, but Often the Wrong Tool

        by matthew22 ·

        In reply to AJAX: The Right Goal, but Often the Wrong Tool

        What I would like to know —

        How many fans of Flash/Actionscript actually paid the $800 out of their own pocket — or did they get it from a wAreZ D00d?

        I, for one, will not steal software.

        Matthew

      • #3146843

        AJAX: The Right Goal, but Often the Wrong Tool

        by apotheon ·

        In reply to AJAX: The Right Goal, but Often the Wrong Tool

        I’ve never seen such complete lack of understanding of the problems at hand in a discussion of web development from so many people as I’ve seen in the responses to this blog post. Wow, folks.

        First, for J.Ja: AJAX is sort of a misnomer. Don’t be fooled by the fact that it refers to “Asynchronous Javascript And XML” into believing that it’s a combination of Javascript and XML designed to create a rich user environment. Google was pretty much the first major implementor of AJAX and took issue with the term AJAX specifically because of the false impression the acronym created. The only reason XML is mentioned is the use of XmlHttpRequest() in AJAX — and even that is misnamed. The whole exercise of developing and using AJAX is centered around two things. The first is Javascript as a means of execution of the concept (and this might be replaced some day if some other equally effective, efficient, and acceptable client-side scripting language becomes widely available). The second, and more central to the concept, is data transfer asynchronous with presentation via HTTP. Asynchronous data transfer capability and view rendering is something desperately needed in high-end web design and for the future evolution of web development, and as things currently stand, our only options are Javascript, Java, and Flash. As much as I dislike Javascript, it’s by far a better option than Java, which in turn is by far a better option than Flash.
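
        To make the distinction concrete, here is a bare-bones sketch of that kind of asynchronous request, assuming a browser that exposes XMLHttpRequest natively; the URL and element ID are placeholders, and note that nothing about it requires XML:

        // Bare-bones asynchronous request. The URL and element ID are placeholders,
        // and the response here is plain text; despite the acronym, no XML is involved.
        var request = new XMLHttpRequest();

        request.onreadystatechange = function () {
            // readyState 4 means the transfer is complete.
            if (request.readyState === 4 && request.status === 200) {
                document.getElementById('status').innerHTML = request.responseText;
            }
        };

        request.open('GET', '/status.txt', true);   // true = asynchronous
        request.send(null);
        // The page stays responsive while the transfer happens; that asynchrony,
        // not XML, is the point of the whole exercise.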

        For the Java pushers: I don’t need that kind of overhead on my system. I don’t want broken functionality from the “write once, run nowhere” language. Applets sound like a great idea — too bad they were done better by Flash, and weren’t even done well there. Anything that delivers an executable, inscrutable bundle of toy functionality to the client the way Java applets are delivered is a problem for performance, portability, and security, and as a result I don’t even have a Java plugin for my browser installed on this system. Keep your grubby little applets out of my webpages.

        For the Flash cultists: Flash is not accessible to those with disabilities. Flash is willfully designed to violate standards. Flash is proprietary and monopolistic. Flash is not fun to work with. Flash is resource-hungry, overused, slow to load, annoying, prone to resource leaks, and better suited to advertising than content. There is no Flash plugin installed on this computer either. Yes, there are open source Flash authoring tools available, and maybe some day they’ll solve the problem for us, but in the meantime Flash is Adobe, not Web. Anything that doesn’t degrade gracefully should never be used for web design, and Flash is the poster child for technologies that don’t degrade gracefully.

        By the way, that Fjax website doesn’t do crap without a Flash plugin. That’s bad web design.

      • #3146342

        AJAX: The Right Goal, but Often the Wrong Tool

        by greg ·

        In reply to AJAX: The Right Goal, but Often the Wrong Tool

        I am with apoth on the Java applets.

        However, I’m not scared of the word proprietary. You mention it as a bad thing but cite no negative example. Open source doesn’t always mean better, by any means. (Open standards are a must, however, much like ECMAScript, which ActionScript and JavaScript are based on.)

        As far as Flash being annoying, any sane person can argue whether that is the technology or the implementer; the same goes for JavaScript and little mouse trailers that flicker in my eyeballs.

        Flash is fun to work with, and JavaScript makes me want to punch babies (too much Dane Cook).

        Many of the people who disable Flash in their browsers are just as likely to disable JavaScript for fear of malicious intent.

        I guess the proof is in the pudding. I have seen many successful enterprise apps in Flash, and I would laugh if you don’t want to reconsider your innuendo about JavaScript being more processor-friendly and stable than Flash (tell me JavaScript never crashed a browser)…

      • #3146498

        AJAX: The Right Goal, but Often the Wrong Tool

        by Thorarinn ·

        In reply to AJAX: The Right Goal, but Often the Wrong Tool

        Anything that requires the use of the mouse and ignores the keyboard is bad practice. Unfortunately there seem to be more and more “web applications” that do not know what a keyboard is. I don’t care which (flashy) “technique” was used to create them; it’s crap. The most important factor in the user experience is usability, and too many developers never seem to have learned anything about it. What’s worrying is that most of those “Web 2.0” apps seem to take it for granted that every user is able to use a mouse.
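
        To illustrate the point, here is a rough sketch of wiring the same action to both the mouse and the keyboard; the element ID and the addToCart() handler are made up for the example:

        // Sketch: wire the same action to both mouse and keyboard. The element ID and
        // the addToCart() handler are hypothetical; the element is assumed to be a
        // scripted control with tabindex="0" so keyboard users can reach it at all.
        var control = document.getElementById('add-to-cart');

        function addToCart() {
            // ... whatever the click currently does ...
        }

        control.onclick = addToCart;

        control.onkeydown = function (event) {
            event = event || window.event;     // old IE event model
            var key = event.keyCode;
            if (key === 13 || key === 32) {    // Enter or space bar
                addToCart();
                return false;                  // keep the space bar from scrolling the page
            }
        };
        // A real <button> or <a href> would need none of this, since the browser
        // already makes those keyboard-accessible; scripting only adds behaviour on top.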

      • #3156910

        AJAX: The Right Goal, but Often the Wrong Tool

        by onefocus99 ·

        In reply to AJAX: The Right Goal, but Often the Wrong Tool

        I was a Java programmer but ran out of time to continue. I found that Flash is great, easy, and quick for designing things, and then you just add the code to make things work. Flash is on 99.5% of all users’ machines out there. To me, keeping it simple is important.
        Well, that’s my 2 cents.

      • #3156887

        AJAX: The Right Goal, but Often the Wrong Tool

        by jmgarvin ·

        In reply to AJAX: The Right Goal, but Often the Wrong Tool

        AJAX has accessibility issues too. It really bothers me that this whole Web 2.0 push is mostly not even thinking about ADA compliance, but rather gee-whiz stuff that nobody will ever use (and that will be a great attack vector).

        While I like the IDEA of AJAX/SOAP/et al., the implementation leaves something to be desired.

        Don’t forget there are blind users, and blind and deaf users, too…

        / Working with a blind user now
        // Hates Gnopernicus
        /// Speakup isn’t bad
        //// brltty roxors your boxors!

      • #3156818

        AJAX: The Right Goal, but Often the Wrong Tool

        by mark miller ·

        In reply to AJAX: The Right Goal, but Often the Wrong Tool

        I’ve only read about AJAX. Once I did, I cringed. Someone would have to pay me a lot of money to create one of these apps. It’s nice that the option is there, but it seems like a hell of a lot of work to accomplish something that could be done so much more easily if the infrastructure were on the client. IMO that’s the problem. I’ve got to put the blame where it belongs, too: with Microsoft. They’re the ones who invented the technology that enables AJAX. It’s been in IE since about 1997. The programmer has to go through a lot of effort to make it work and to implement some of the more basic features that are visible to the user. It’s too bad that customers are resistant to installing some sort of internet-enabled middleware, like the JRE or the .Net Framework, on the client. Personally I’d rather be working on a Flash client, if nothing else. At least that technology is designed for this sort of work. I’m looking forward to WPF/E, but I think it will be a while before it catches on. Customers are resistant to installing something that doesn’t already come with Windows, and I’m skeptical that the uptake of Vista will be dramatic.
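
        For readers who have not seen it, the sketch below shows roughly the kind of boilerplate being alluded to: the request object started life as an ActiveX control in IE, so era-appropriate code has to probe for it before falling back to the native object that other browsers (and later IE 7) exposed. This is a generic sketch, not any particular library’s implementation:

        // Generic cross-browser request factory; a sketch, not any particular library's code.
        function createRequest() {
            if (window.XMLHttpRequest) {
                return new XMLHttpRequest();                        // Mozilla, Safari, Opera, IE 7+
            }
            if (window.ActiveXObject) {
                try {
                    return new ActiveXObject('Msxml2.XMLHTTP');     // newer MSXML versions
                } catch (e) {
                    return new ActiveXObject('Microsoft.XMLHTTP');  // the original IE flavor
                }
            }
            return null;   // no support at all; degrade to full page loads
        }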

      • #3156786

        AJAX: The Right Goal, but Often the Wrong Tool

        by justin james ·

        In reply to AJAX: The Right Goal, but Often the Wrong Tool

        jmgarvin –

        I have actually been working on a blog post about just that: the near impossibility of using AJAX sites for handicapped people, both the visually handicapped (such as the blind or vision impaired) and the physically handicapped (such as those who have problems with fine motor control). The biggest obstacle is getting my hands on some screen reader software. I want to show beyond a shadow of a doubt how AJAX applications leave a significant portion of viewers behind.

        Thanks to everyone for their awesome feedback & comments!

        J.Ja

      • #3155142

        AJAX: The Right Goal, but Often the Wrong Tool

        by mindilator9 ·

        In reply to AJAX: The Right Goal, but Often the Wrong Tool

        I take issue with the above comments concerning Flash. I do not get the impression, from having read Matthew’s or apotheon’s comments (shame on you, apotheon), that you know very much about Flash at all. Yes, there are open source compilers. PHP even has the Ming extension for generating SWF wrappers. Flash’s remote objects are far superior to AJAX, one reason being speed. And apotheon, I refuse to believe you’ve even bothered with the program when you say developing with it isn’t fun. Either that or you are ready for retirement.
        Adobe does not own the .swf extension. Read up, people. SWF does not stand for ShockWave Flash like you all probably think; it stands for Small Web File. Many other programs exist for creating Flash-like RIAs, such as Flex and Swish. I prefer Flash, though I admit my exposure to Flex is limited. Even more promising is the amazing tool known as AMFPHP. I suggest every serious PHP developer look into this integration of PHP and Flash.
        Every improvement Flash has made in its successive versions has brought it closer and closer to a true OOP IDE, one that is easier to use, and above all easier to keep organized in your head, for making what you envision come to life. I defy anyone to name a technology that can reproduce the immersive experience of Flash. If you have complaints about what you’ve seen in Flash, you can 97% of the time blame the developer, not his choice of tools.
        I highly suggest the Friends of Ed book PHP5 for Flash, as well as the myriad other books on rich Internet application development in Flash.
        As far as I can tell, your complaints about Flash are deeply rooted in ignorance and unfounded bias. I challenge any of the users here to bring a valid, coherent argument against Flash development (“proprietary,” “platform,” or “not fun” arguments are neither), or against the PHP/Flash model. Here, I’ll help you out: I’d like to see way more functions for operating on strings and arrays in Flash; most of that work needs to be done on the PHP side. SharedObjects and remote objects are wonderful additions to the Flash development scene, as is the central Macromedia server.
        To side with some of the dissenters on one issue, I will say that I also was not happy with Adobe’s acquisition of Macromedia. This may be the only thing I think has a chance of keeping Flash from being the best alternative for web development. Adobe only ever did its Creative Suite right, and that honestly has more of a foundation in the print world. There are some great features for creating web graphics, but consider this: a file exported for the web into .png format and imported into Flash is still larger than that same exported file opened in Fireworks and copy/pasted into Flash. In fact, the technique I just described, when the image properties are set to lossless PNG, absolutely smashes the file size like Mario on a Goomba. I have an app whose .swf files, when exported using both methods and compared, came out to 1.5 MB (Adobe import) vs. 200 KB (Fireworks copy/paste). Back to knocking Adobe: GoLive and LiveMotion are dismal failures, and Macromedia has seen fit to hand over its gold bullion to a company I feel is historically inept at web development applications.
        Ignorance of this tool is by far your greatest excuse for bashing it. Being an artist who became a programmer, I can only surmise that this is because the majority of coders have only logical skills (if that) and no creativity. You must be so intimidated by the responsibility of actual graphic design that you shun it for your dark closet of logic design. I plan to eat your lunch for dinner, and the Flash/PHP model is my barbecue pit.

      • #3158114

        AJAX: The Right Goal, but Often the Wrong Tool

        by duckboxxer ·

        In reply to AJAX: The Right Goal, but Often the Wrong Tool

        Admittedly, I am only mildly watching this, but what is the status of Flash/Flex with regard to SEO and Section 508 compliance? Not that JavaScript is any better; I am just curious. There’s a right tool for every job, and not every screwdriver fits every situation.

      • #3143818

        AJAX: The Right Goal, but Often the Wrong Tool

        by siryes ·

        In reply to AJAX: The Right Goal, but Often the Wrong Tool

        I actually have a mild preference for websites that DO NOT use Flash. It is a fine tool on its own, but it can be (and quite often is) misused. The first abuse problem with Flash is disturbing advertisement popups that interfere with the normal web browsing experience; on the other hand, there are many examples of good uses of Flash too. Similarly, the first problem with Java applets is that with the old JRE versions (1.2, 1.3, and 1.4) they loaded and ran slowly; that has changed with the Web Start plugin, but who knows or cares about that today?

        But the worst problem with Flash-only sites is that they are not easily indexable, searchable, or bookmarkable by outside mechanisms. Think <META> keywords, think web search (like Google), think Favourites/Bookmarks.

        From the perspective of a user of a hypothetical web shop, all is fine until he or she wants to persist the current state of the application. Taking a bookmark to the selected product displayed on the screen, for future reference, seems impossible. Instead, only a link to the first “page,” or a link to the start of the application, is recorded. One has to repeat the navigation or search using the Flash interface to get back to the desired place. It’s like starting every time from the root directory of your disk to reach a file that’s buried nine directories deep, without any shortcuts. Not too good.
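
        For comparison, AJAX applications face the same bookmarking problem, and a common workaround is to mirror the current view in location.hash so it can be bookmarked and restored. The sketch below assumes a hypothetical showProduct() function and id scheme:

        // Mirror the current view in the URL fragment so it can be bookmarked.
        // showProduct() and the id scheme are hypothetical.
        function showProduct(id) {
            // ... fetch and render the product pane asynchronously ...
            window.location.hash = 'product-' + id;    // e.g. http://shop.example/#product-42
        }

        function restoreFromHash() {
            var match = /^#product-(\d+)$/.exec(window.location.hash);
            if (match) {
                showProduct(match[1]);   // a bookmarked or pasted URL lands on the right view
            }
        }

        window.onload = restoreFromHash;
        // A Flash-only site gets nothing like this for free; the developer has to
        // build deep linking in deliberately, which is exactly the complaint above.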

        Indexing a Flash-only site is impossible without knowledge of its internal structure, and even then it can be hard. An .SWF file is in fact a binary blob, which cannot be easily parsed like HTML or XML. It is impossible to extract the links that lead to the other pages of the application, which is perfectly possible with HTML. In general, self-contained applications (Flash, applets, .exe programs) pose a problem for us, because we trade a bit of our freedom for convenience, speed, and ease of use: the freedom to process the information the way we see fit. I know it depends on the person and their desires, but in the end it’s the information that matters and not the nice GUI.

        The more web applications advance, the more we want to index their contents, search through the data or prices, and in general be in control of this ever-growing amount of information. The switch from HTML to XHTML has been only a first step toward that goal. Adding AJAX to the mixture makes it easier and faster to use said web applications. But it’s the data that we care about, and the way we can get to it fast when we need to. Currently the trend toward separating actual information (HTML/XHTML/XML) from presentation (CSS) and interaction (JavaScript) is only getting stronger.

        But you can see that for yourself! Recently TechRepublic published a free sample chapter from a book about AJAX. Even though it is loaded with technical details, it is fun to read. You can get it here:
        http://downloads.techrepublic.com.com/download.aspx?docid=177995

        Look at pages 38 and 40. On the former you can read that “You use HTML to say what a web page is, and CSS to say how the page should look” and “You use JavaScript to say what a web page does.” The latter depicts that in an even better, easy-to-understand way, with the browser integrating all the elements into the web page that the user sees.

        What I see as an acceptable alternative to Flash, AJAX, applets, etc. is a new standard for both web browsers and web applications: a standard that promotes even richer user interfaces by demanding more standardized support from browsers and web servers, and that puts more power into the hands of web developers. Now that’s something I’d call a Web 2.0.

        Such a thing is currently brewing. It’s called Web APIs, and you can find it on the World Wide Web Consortium’s site.

        When this takes off, a new generation of web browsers, web tools, IDEs, and books is going to appear. That is a moment I await patiently, and at that point we can discuss the then-outdated technologies, like Flash, applets, and the like. I hope you all will benefit from that as well.

        Cheers,

    • #3151356

      Passionate Programmers

      by justin james ·

      In reply to Critical Thinking

      There was a lot of fantastic feedback from readers regarding my recent series, “How to Think like a Programmer” (Part 1, Part 2, and Part 3). Two readers, staticonthewire and Mark Miller, made some extremely astute comments regarding the role of emotions. The series left out emotions, but that is not to say that they do not play a role. staticonthewire remarked that, contrary to my example, he has met scientists who made lousy programmers, and a mystery novel buff who was fantastic, and that the difference was passion for programming.

      I cannot agree more. A programmer who treats the job as a way to make money and nothing more will rarely offer more than what they are asked to give. These are the programmers who churn out mediocre code on a mediocre project, who do not involve themselves in making better software beyond just meeting the project specs, and so on. The programmers who are passionate about technology and writing code are the ones who will go home and research the best way of doing something, write code in their spare time, learn new things after hours, and more. These are the programmers you want to hire.

      My initial experiences with programming were amazing. I loved having a problem to try to solve, and figuring out the solution using nothing more than my own wits. We did not have the Internet then; the answers were not a Web search away. You succeeded or failed based upon the knowledge, experience, and skill of yourself and the people you were working with. The debugging tools were primitive at best; dumping variable state information to the screen or a file was the best and easiest way to see what was happening at run time. Frequently, I would print out all of my source code and follow it through by hand, inventing user input and seeing how the code operated in response. I could not have been happier.

      What I love about writing code is the nitpicking and problem solving. A few days ago, someone asked me to describe what it is like to do my job. I told them, “I am a professional nitpicker. Every detail counts. On top of that, my job is to think of the craziest way someone could use my product, and then have it continue functioning.” I think that sums it up pretty nicely. I enjoy programming for the same reasons that I enjoy reading great books, doing crossword puzzles, listening to “artsy” music, looking at modern art, and watching David Lynch films. It is really all the same type of thinking, just a different medium.

      It seems that in the last ten years or so, colleges started moving away from teaching Computer Science as an offshoot of mathematics, a mostly theoretical discipline, and toward teaching Computer Science as a vocation. Instead of learning about “big endian systems,” Big O notation, formal logic, and so forth, CS students are being taught how to write code in Java, SQL, and other languages. They are being given a four-year education in “best practices.” These are “shake and bake” programmers. The passion, the excitement of solving problems, has been taken away, replaced by painting by the numbers. The worst part about being taught programming at the vocational level is that the industry changes rapidly enough that whatever you learned is going to be outdated by the time you graduate. Chances are, it was outdated by the time it got into the curriculum.

      Programming is increasingly a matter of gluing together libraries written by a few select people, the ones who are having all of the fun. At this point, the places where truly interesting codesmithing seems to occur are the shops making development tools (Sun, Microsoft, Borland, etc.) and the small places doing niche work. Some of the FOSS projects are extremely interesting as well, and they have the advantage of not caring about profits, so they are free to work on unusual and creative projects regardless of potential market size. Anyone in between these types of environments is just gluing together libraries written by a big-time player into a standard, boring C/R/U/D application.

      One thing I have found over the years is that when I am enjoying myself at my job and being challenged, the passion is there. Put me on a tight deadline to come up with something that seems impossible, and you get great results. If you hand me a project that has a long deadline and involves zero creativity or problem solving, it is likely that I will not enjoy myself as much and the project will drag. The passion just cannot exist for a dull project. I think this is one of my big concerns with the IT industry. There was an extremely long period of time when things were changing constantly, and when a piece of software or hardware, or a way of doing things, became obsolete, it was replaced by something markedly better, more interesting, and more pleasant to work with. Every day brought exciting change.

      It is a shame that the world of programming seems to be moving slowly towards “shake and bake” code. Maybe the direction will reverse? I certainly hope it will! I am passionate about programming. I am one of those twisted individuals who writes code in their spare time. I rarely have a day truly off of work, because even when I have the day off I cannot resist popping onto the VPN and doing a few things. For me, writing code is like playing Civ III, but instead of “just a few more turns!” it is “just one more subroutine!” When a project is rolling, I have been known to spend 16, 24, sometimes 50 hours straight at the office working on it (they know better than to make comments about the smell, and to just smile and appreciate the results). This is not too uncommon for programmers to do, and has been the source of jokes for decades. How many other jobs are notorious for having these types of things happen? Let’s hear it for passionate programmers world wide, and give ourselves a hand.

      J.Ja

      • #3161271

        Passionate Programmers

        by mark miller ·

        In reply to Passionate Programmers

        Passion can be applied in different ways. I enjoy .Net quite a bit, but only because it supports my application development efforts better than past tools/environments I’ve used. Since my interest in Lisp has been piqued, I’ve thought about the idea of including it in future .Net projects (support for that exists) where most of the code is written in C#. I think where it might come in handy is in dealing with some thorny algorithmic problem. The only thing I’d be worried about is if I’d be allowed to do it. I think most employers/customers (if they chose to be aware of such things) would be wary of that, because they might be worried about being able to maintain the code in the future.

        I’ve worked in MFC for a while (about a year and a half), and I don’t appreciate it as much as what .Net offers. If I had all the time in the world I might prefer working in C++ for some projects. I find that it’s more difficult to move the ball forward with it, but you do get faster programs.

        My interests lie in building applications with business logic involved, writing code libraries and utilities, and in programming languages.

        One of my first jobs out of college was working on a script interpreter for a report generator. It was challenging but very engaging for me. Unfortunately it was all dependent on an in-house customized database structure. When we found out the system was just not going to serve the customer’s needs, we dropped it, along with the report generator project. If there’s one thing I hate more than anything else, it is working on a project that doesn’t get used. I’d like my work to count for something. We ended up switching to using Windows 3.11 and a relational database model. I worked on that for a bit in C, and it was very interesting. There was a lot to learn, I liked the support that Windows had, and the visual GUI aspect was a real hook for me. We also made faster progress in getting the project done. That was nice. We had fewer budgetary pressures.

        Having code library support isn’t such a bad thing. What I bring to my work is the knowledge I gained from not having that support. In the line of work I’m in though, I would never want to work on a project solely using technology that had minimal tool and library support. A project like that would have “death march” written all over it. I’m not trying to say that’s a universal truth. Obviously if someone’s working on a totally innovative concept, then of course the library support is going to be minimal, but that involves working on a project with a timeline and expectations that are very different from the projects I work on.

        Like I said I like working on code libraries and utilities, too. I’ve had a few rare opportunities to work on such things. They give me an opportunity to work on purely geeky design issues. Not that I don’t like visual design as well. It’s just different.

        Since I got out of college I’ve worked on custom business applications, and they’ve usually involved working with a database. I think the work suits me well. I like the variety.

        I think you’re right that the CRUD work is the most boring. As I was writing this I was thinking about past projects I’ve worked on, and the most boring aspect of a job role I had at one of my past employers, back in the mid- to late-90s, was where I spent most of my time doing CRUD work in C. It was challenging at first because what I did involved validating the referential integrity of data. Pretty laborious, and you have to be meticulous about it, lest you miss something. There came a point though where I wanted to get out of doing that SO badly. One approach we tried was making the code object-oriented in C++, since I noticed that a lot of the C code I was working on was just repeating patterns. That would’ve at least given me a creative opportunity to try and automate the CRUD process a little more. But that got abandoned when we realized that in reality C++ on Unix was not as standardized as we would’ve liked. Every customer had a different system with a different C++ compiler (if it was even installed), and none of them were guaranteed to have the same feature sets, like universal support for templates and all their features. By then ANSI C was rock solid. It was supported on every system. I eventually quit the job because it was holding me back.

        Nowadays I try to find interesting projects, ones that have CRUD as a component (I like database applications and this goes with the territory), but involve significant business logic, or working with a technology I’ve never tried before. .Net has made dealing with CRUD easier (love them data adapters!). I try to use data adapters judiciously. There are times when using a Command object and a DataReader is better suited to the task. .Net 2.0 makes dealing with CRUD even easier, from what I understand, since it supports two-way databinding. That’s something to look forward to. The less I have to deal with it the better.

      • #3161240

        Passionate Programmers

        by justin james ·

        In reply to Passionate Programmers

        Mark –

        You are 100% right about .Net and the C/R/U/D! I am not a huge fan of VB.Net or C# because I think that they are fairly weak as languages, but I really, really like the .Net Framework, as it does so much to alleviate the hassle of dealing with data. I have found though, that the weakness of the “big two” .Net languages, once the C/R/U/D parts are finished, actually slows down a project. I think there is probably a ratio of business logic to data work at which VB.Net and C# turn around and become less efficient to work in. I am rather grateful that all of the .Net work I have done has not hit that point.

        I was really disappointed in ActiveState’s Perl for .Net (in the PDK). I was hoping for something that would seamlessly plug into Visual Studio and let me work from there; instead, it just compiles a DLL that “talks .Net.” It would have been a great opportunity to bring a truly elegant, powerful language to .Net, and help bridge the gap, but instead it just does not work. The strength of .Net, as well as Visual Studio (indeed, everything in the modern Microsoft universe), is that it all ties together so nicely. Despite all of the inefficiencies and hassles involved a lot of the time, it really does take away much of the “busy work” of building an application. But to make a system where some of the code gets written and compiled outside of the Visual Studio system just does not work for me.

        I am seriously looking into F# (Microsoft’s implementation of O’Caml, plugs right into Visual Studio) as I want to be using a dynamic language with support for eval(), and the PDK is not where it’s at for me. I will be blogging soon about F#, once I have some time to build a few small things with it. I would also like to see a Lisp that plugs into VS for the same reasons you want to work with Lisp. For any kind of complex business logic, Lisp, Perl, and other interpreted languages are the best thing going, especially if you can deliver a customized file of modules to individual users or customers, and have the compiled end of the application be a framework. This is where I am headed with my applications, as I have customers who all want the same application, but have radically different business rules, and I simply cannot maintain a separate code base for each customer. My only other real option (and considering what I am working with, it is not very viable) is Web-based applications in an interpreted language, which sticks me with either Perl (too much effort on the C/R/U/D end) or PHP (bleh, all around).

        J.Ja

      • #3146167

        Passionate Programmers

        by mark miller ·

        In reply to Passionate Programmers

        I’ve found two Lisp .Net languages, but they don’t integrate with VS. They do, however, provide script engine capabilities. They are:

        L# – by Rob Blackwell – http://www.lsharp.org

        According to the author it’s supposed to be a .Net implementation of ARC, which is a Lisp-like language. It has an eval function, and can be used as a script engine inside your own program, coded in C# or VB.Net. It also has a command-line interpreter so you can try out code.

        DotLisp – by Rich Hickey – http://dotlisp.sourceforge.net/dotlisp.htm

        Has an eval function in the language. Looks to have more meat to it in terms of operators it supports, and provides more scripting and interop support. It is not a faithful implementation of any standard Lisp. Instead its stated purpose is to be a “Lisp for .Net”. So it has some .Net characteristics that are not compatible with standard Lisp. You can use it as a script engine for your application. In addition it provides a facility so that you can provide hooks into your own C# or VB.Net code, accessible by the script. You can either feed it a script from within your own code, or have it load a script file and run it. It also has an interactive interpreter so you can try out Lisp code at a command line.

        GotDotNet has a bunch of .Net versions of programming languages. The only functional programming language I’ve found in their list so far that integrates with VS (besides F#) is SML.Net. At least it says it integrates. There’s a version of it you can download that contains the extra VS integration stuff.

        I remember using SML while in college. I found it to be an understandable language. It had enough of a procedural feel that it wasn’t a total culture shock, though it seemed more geared to pattern matching, a bit like Prolog.

        You might also be interested in looking at some of the other language projects that GotDotNet lists on its .Net Language Developers Group page at: http://www.gotdotnet.com/team/lang/

        The languages it lists (beyond the ones we’ve already mentioned) are:

        .Net IL Assembler
        Ada
        APL
        AsmL – the Abstract State Machine Language
        Cobol – from Fujitsu
        Delphi – from Borland
        Forth
        Eiffel
        Fortran (2 versions)
        Haskell
        Lua – embedded scripting language
        Mercury
        Mixal
        Mondrian
        Nemerle – a hybrid functional/OO language
        Oberon
        Pascal
        PHP#
        Prolog (compiles Prolog to C#)
        Python
        RPG
        Ruby (an interpreter, and a Ruby/.Net bridge)
        Scheme (2 versions)
        Smalltalk (2 versions – one says it’s a compiler)

      • #3146705

        Passionate Programmers

        by billt174 ·

        In reply to Passionate Programmers

        “I have been known to spend 16, 24, sometimes 50 hours straight at the office working on it”

         

        Sounds like you need a life.

        Don’t mistake passion for obsession. The reason you are on this planet is not to make software; it’s to make love, make friends, make food…

        Making software should be just a job not a life.

      • #3146579

        Passionate Programmers

        by snoopdoug ·

        In reply to Passionate Programmers

        I’m one of those who graduated in the day when CS meant computer science: learning algorithms, order-of-complexity analysis, NP, etc. I wrote my own B-tree balancing code, quicksort, etc. Did it make me a better developer? Heck if I know. It is axiomatic that whatever path led to your success is the “One True Path” ™. Good developers are made. Most developers would like to have exciting and challenging jobs, but that’s not going to happen; that’s why it’s called “work” and not “fun,” and why we get paid.

        Sweating the details is a crucial part of being a developer. Maybe more so than liking to solve puzzles. You bet the job gets tedious, but tell me one job that does not have tedium. If you enjoy the intellectual challenges of development, you must also tolerate the periods of boredom and push yourself through.

        Now don’t get me started on meetings…

        doug in Seattle

      • #3157031

        Passionate Programmers

        by apotheon ·

        In reply to Passionate Programmers

        I’m kind of disturbed by the concept of L#. I wonder how there can be a functional .NET implementation of Arc, considering that Arc isn’t finished yet (and doesn’t have a projected finish date, either). Sure, Paul Graham has written a very effective heuristic spam filter in a working version of Arc, but the language still simply isn’t nailed down.

        I’d be more inclined to go with one of the .NET Scheme implementations, at least until Arc is approaching completion. L# is likely to suffer from being trapped between the unpopularity of Lisp in the .NET community and the looming specter of an eventually-finished Arc, and thus will likely never mature much because it just won’t get the kind of development attention it needs, at least until Arc is finished, and even then only if L# keeps pace with Arc. Keeping pace with Arc presents its own problem, of course, because it means that the parser is a moving target for L# programmers.

        Yeah, go with a .NET Scheme implementation instead, I think.

      • #3155361

        Passionate Programmers

        by emanueol ·

        In reply to Passionate Programmers

        🙂

        I have loved programming since I was 12 years old (Sinclair machines), because programming gives us the chance to change the world without needing anything or anyone else.

        Machines don’t make our life hard the way most people do, directly, with their social and influence schemes. This seems a bit philosophical, but hey, it’s something I felt when I was young, and now it has been democratized by giving the Internet to people. It seems that humans using computers as proxies to other humans makes sense, maybe because all of us together are part of a wider super-being (I call it “the super being”) that is like a human, emotional network connecting all people. So the Internet just allowed that biological human feature to work on a daily basis and spread to everybody in the world. We are all nodes on a bigger emotional network (some people and religions call this… God). 🙂

        Getting back to my comment: in other words, we are completely independent (we just need electricity to work/live) and able to build abstract universes (a bit like acting as God… hmm, God again, lol), but powered by a non-selfish mission statement of “help the biggest number of humans possible by doing something I love.”

        All the biggest social and economic impact events always arose, and still arise, from a small number of passionate people (1? 2?) who believe in something and work hard at putting their ideas and feelings into the real world; and because emotions were the genesis motivation, of course everyone who uses their solution will indirectly feel the same emotions (be it building the first plane, Ford with the first car, or Sinclair with home computers). Programming gives us the same kind of means, but with the advantage that it’s essentially money-free: you don’t need to buy wood, iron, and raw materials to build things. 🙂

        Only now, in the 21st century, is the business world understanding that, after all, life is all about emotions.

        I feel very happy about the advent of the Internet, since it brought to non-programmer human beings the same strong feeling I felt at 12 years old when I had my first ZX80. 🙂 I’m sure the majority of programmers also share this feeling. 🙂

        Check Google’s 70%-20%-10% split of time use at work, for example (70% for “normal” project development, executing managers’ strategy; 20% for work on an internal project YOU like; 10% for sports/leisure/sleeping/etc.).

        And I have a friend who just did a PhD on NetValue involving emotions, proximity networks, etc., who is already thinking that “the way” for employees to use their time at work is an even more ambitious 80%-20% scheme (80% work on whatever you want, we provide you with the resources, plus 20% hey, show us how your dream is evolving :)… hmm… I guess that’s why open source has evolved so much. 🙂 In the future, products will be free and people will only pay for services. And that’s the way to achieve quality of life for everyone. It doesn’t make sense for the human race to work as slaves to something called “enterprises/companies”… hey, hello?!… it’s supposed to be the other way round: people working for people. And the Internet, once again, allowed this and continues to evolve the world toward this new model era.

        OK, over and out. Love, cheers, and may the Force be with you always ~

        Emanuel Oliveira

         

         

      • #3166017

        Passionate Programmers

        by smiklakhani ·

        In reply to Passionate Programmers

        Programming is about passion. There’s no doubt. I mean, you can learn all the jargon and theory you want to learn, but to really program you have to have the passion for it. I’m a Higher Level Computer Science student from East Africa studying under the International Baccalaureate.

        I’ve had a passion for programming since the age of 11, when I first started learning BASIC. Now, in Grade 11 I’m gradually developing my skills in Java. I’ve covered tonnes of theory – QuickSort, Binary Search, Linear Search, Linked Lists, Recursion etc. etc. And it might be tiring for some after a while, but an interest in programming has kept me going.

        Some people say that you just have to memorise some of the code. I don’t believe in that. When I sit for an exam or a test, I recreate the code, because I prefer understanding the computer’s mind and then writing code based on that.

        It’s truly like playing Civ III (or Civ II for that matter) as J.Ja describes it. You’ll spend tonnes of time on it. It will undoubtedly look boring to someone else (partly because it won’t make any sense). But most likely, it’ll be a masterpiece to yourself, and that’s what matters.

        Smik.

    • #3147262

      Review of Information Dashboard Design: The Effective Visual Communication

      by justin james ·

      In reply to Critical Thinking

      Last week, I read the first chapter of Information Dashboard Design: The Effective Visual Communication of Data (ISBN: 0-596-10016-7) by Stephen Few, courtesy of the publisher, and available as a free download on TechRepublic. I liked what I read, and purchased a copy of the book. I finished reading it today, and wanted to share a bit about it.

      I enjoyed reading this book. I have studied usability theory for many years (primarily Jakob Nielsen, but other authors as well) and did not find anything in this book to contradict what I already knew. I found the information in this book to be very useful in and of itself. Even better, the information in this book is extremely applicable to other less-specific forms of communication, such as Web sites in general, print, and so on. The author does an excellent job explaining the best uses of different types of graphs and data displays, how to separate different groups of information on the screen, and so on. The author also (wisely) completely sidesteps any mention of any particular software packages. Because of this, this book will remain pertinent for a very long time. There are many excellent examples of good data presentation as well as bad data presentation.

      The book itself was rather lightweight. Significant portions of each page were whitespace, and the book was printed in a rather large typeface. The book itself claims 223 pages, but the actual content was more like 200 pages. Furthermore, much of the page space was taken up by illustrations (reasonable, since the book is about visual design). There were also many “filler” pages throughout. Overall, I definitely got the impression that the book had a lot of “puff” to it to help justify its price tag. I read the book in less than two working days, merely by reading while my Excel reports were running. I think it was well under two hours to finish the book.

      This leads to my biggest problem with the book, which was that it was extremely basic. The examples of bad design in the book show that either the vast majority of the people writing interfaces are clueless, or systems with poor usability sell better than those with good usability. Overall, there was very little new to me in this book. But that is me, and the proliferation of unusable systems shows that the industry could use an easy-to-read, basic guide on data presentation. If you are new to the field of data presentation, GUI design, or Web design, or think that a basic text on how to show data effectively would be helpful, this is the book for you. If you have experience and knowledge in usability already, you will probably find this book to be a little bit too simplistic and general for your needs.

      J.Ja

    • #3155152

      Web Development vs. Web Marketing

      by justin james ·

      In reply to Critical Thinking

      This is the first in a two part series.

      Developers are often cocooned in a shell, isolated from much of the greater operations of their employer. All too often, IT is treated as a black box, frequently with GIGO (garbage in, garbage out) results. Project specifications are often driven by people and departments with little understanding of technical matters, and the people implementing the specifications have little understanding of the why of the project. When the why is “create or expand an Internet revenue stream,” the result can be disastrous.

      Many Web sites (especially online stores) are, and should be driven and run by the Marketing or Sales departments. They are experts at selling the product. They know the market, the competition, and more. What they do not know is Web development, and on occasion they do not understand Web design either. This creates bad project specifications, where the how is dictated in a demanding yet vague way, and the why is completely ignored. I have been given project specifications similar to these way too often:

      * Look similar to site XYZ, but with our logo and colors
      * Have a drop down menu system like site ABC
      * We want to collect the following information from visitors at every opportunity: [insert list of the most personal and intimate details of the visitors’ life here]
      * Spiffy Flash intro
      * A lengthy “About Us” page that gives the users plenty of pictures of our CEO and facility
      * A shopping cart system, preferably one that does everything that a live salesperson would do, with a strange interface and bizarre options for the user
      * “Community building” features, such as a forum, product ratings, instant messaging between registered users, wikis, blogs, RSS aggregation, and a million other buzzwords of the week
      * “Viral marketing” systems like affiliate programs

      Sound familiar? That pretty much sums up nearly every Web store project specification that has been dictated to me. It is pretty interesting how most of the items on the list are, at best, only loosely related to the why of the project: to sell products! Here is what a why oriented project specification might look like for a Web store:

      * Must appeal visually to our target audience: [insert target market demographics here]
      * Site must be extremely usable by our target audience, and very usable by non-target audience members
      * Site must present the most useful and interesting product information up front, but allow visitors to get as much product detail as they need, to simultaneously encourage sales and minimize the need for customers to contact us before making a purchase
      * Site must be consistent with our corporate “look and feel” in a manner that does not compromise usability
      * Site must be as secure as possible
      * Site should foster a trust relationship between our company and the customers, particularly new customers
      * Site must have search engine optimization baked in from Day 1

      See the difference? Interestingly enough, not a single item on the why oriented project specification precludes a single thing on the how oriented list; however, many of the items on the how oriented list can preclude items on the why oriented list, depending upon a number of non-technical factors. For example, if the target audience is “well-to-do, young graphics designers using Macintoshes who are at work,” a site with a lot of Flash pieces that often emphasizes form over function is not likely to hurt sales, and may even boost them a bit. If the target audience is “retired people on limited incomes,” chances are that anything with a font size smaller than twelve points, or that requires “the latest and greatest” in plugins, or a good deal of technical savvy will not be able to sell much product at all.

      In other words, we have a classic case of missing the forest for the trees. The trees are technological and aesthetic hows. The forest is the business why of the project in the first place. Even worse, it is not just that the business reasons for the project are forgotten when development focuses upon technology. In all too many cases, the technology actively prevents the business reasons from ever being fulfilled. In the next article in this series, I will provide some examples of how focusing upon the technological aspect of Web development, as opposed to the marketing side of a Web project, can hurt more than help.

      J.Ja

      • #3155075

        Web Development vs. Web Marketing

        by dawgit ·

        In reply to Web Development vs. Web Marketing

        Sounds familiar so far. Now I’m waiting for part II.

        I know you might not mean it so, but this is a humorous way to look at it (OK, it’s more “sad but true”).

      • #3155018

        Web Development vs. Web Marketing

        by justin james ·

        In reply to Web Development vs. Web Marketing

        dawgit –

        I do aim for a dark humor sometimes; I am happy that you were able to get a laugh out of it. 🙂 These really are those “shake your head, it’s sad but true” types of scenarios. On the last freelance Web store I worked on, the initial project specs sounded a lot like “install some typical software package, make one or two small changes to the base code, edit the templates, and get paid.” I have invested well over 200 hours of my time into the project, and it still is not finished, mainly because the customer wanted some bizarre “system builder.” In a nutshell, they wanted something along the lines of what Dell does with their website, but even more comprehensive.

        I keep wishing for the customer who has never seen a computer before, and just says, “here is what I do by hand on paper, please find a way to have the computer make this more efficient.”

        J.Ja

      • #3157667

        Web Development vs. Web Marketing

        by hchelette9 ·

        In reply to Web Development vs. Web Marketing

        Absolutely 100% on the mark, and not limited to web projects, either! Unfortunately, I’m afraid you’re “preaching to the choir.” Those of us likely to read this are NOT the ones perpetrating the problem. Even the few marketing types who might chance to read TechRepublic are likely to be in denial and refuse to consider that their specs could be difficult or problematic. Will there be an article on how to get your Marketing Department to get a clue?

      • #3157664

        Web Development vs. Web Marketing

        by csturman ·

        In reply to Web Development vs. Web Marketing

        If you really want to have fun with marketing types, start inventing buzz words and see who picks them up.

      • #3157585

        Web Development vs. Web Marketing

        by snoopdoug ·

        In reply to Web Development vs. Web Marketing

        When the marketing/sales droids pester me, my first question is “What are you trying to do or not do?”. Unfortunately most of us tech types are not good at handling the fragile psyche of others. We tend to say things like “That’s stupid” or “That will never work”, instantly converting an opportunity to work compatibly with a colleague into a battle of wills and egos.

        Stop yourself from your first, blunt response. Let your ego go. You might learn something.

        Try putting the droid at ease with disarming responses like “I’m kinda out of my element in this look-and-feel thing. Could you walk me through what we are trying to accomplish by putting a Flash intro on the home page?” You never know, they might even have a good idea.

        Also, don’t forget to get them to prioritize their wishes. Is the Flash intro more important than the shopping cart? If it’s crunch time, can I push the Flash intro to the next release?

        Ask them for their marketing/sales analysis. Who is the customer? What are the demographics? How many hits do we expect? Make sure they have done their due diligence before committing to the work. If they are flying by the seat of their pants, you are building something that may not solve anyone’s issues.

        And most importantly, do not let them bully you into a killer schedule. Roll out the updates in the order of importance. In my experience, they are more interested in the look-and-feel than the technology behind the application. I spent three weeks dummying up an app for a demo. All they could talk about was the crappy interface. Find a way to abstract the look-and-feel (ooh, I know, XML config file! The buzzOmeter will ring off the hook) and you will be way ahead.

        doug in Seattle

      • #3165766

        Web Development vs. Web Marketing

        by hamish_nz ·

        In reply to Web Development vs. Web Marketing

        Thanks, that is some useful information, although I don’t agree that Flash necessarily equals bad accessibility. A well-designed Flash site will work on practically any platform, whereas other formats can be highly browser/security-setting dependent.

        I would hazard to say that you have it the wrong way round: a tech-savvy user who knows his keyboard shortcuts might find the Flash site frustrating, whereas the grandparents would probably appreciate the point-and-click nature of Flash applications.

        That is some excellent advice, though. I think Marketing and IT people think in similar ways sometimes; we can be much more interested in the how, so it is good to remind everyone where the focus should be.

    • #3156646

      How Web Developers Can Hurt Web Marketing Efforts

      by justin james ·

      In reply to Critical Thinking

      This is the second in a two part series.

      In my previous blog (Web Development vs. Web Marketing), I discussed some of the ways project specifications can interfere with the business reasons for the project. Today, I will take a look at just how technical implementations, while being technically correct or acceptable, can hinder and even hurt a Web site’s ability to fulfill its purpose. To illustrate, I am going to use the sample “why oriented project specification” that I laid out in the previous article. To repeat it:

      * Must appeal visually to our target audience: [insert target market demographics here]
      * Site must be extremely usable by our target audience, and very usable by non-target audience members
      * Site must present the most useful and interesting product information up front, but allow visitors to get as much product detail as they need, to simultaneously encourage sales and minimize the need for customers to contact us before making a purchase
      * Site must be consistent with our corporate “look and feel” in a manner that does not compromise usability
      * Site must be as secure as possible
      * Site should foster a trust relationship between our company and the customers, particularly new customers
      * Site must have search engine optimization baked in from Day 1

      This is actually the basic skeleton for any good project specification. Note how it focuses upon why the Web site is being created in the first place: to sell products as efficiently as possible! A site that looks great and has cool widgets all over the place, but is totally unusable and baffles search engines, will not make much money at all.

      Must appeal visually to our target audience

      I would have hoped that garish websites, graphics heavy websites, and so forth would have died with the late 90’s. Sadly, this did not happen. Thanks to the broadband boom, Web designers and developers now feel free to load a Web site with 200KB worth of graphics. I guess these developers all live in major cities. Most parts of the state I live in (South Carolina) are still on dialup. Too much of the state either has no cable service, or is so sparsely populated that the phone company is not going to put in COs for DSL all over the place. At best, these users are on satellite, which I have found to be unreliable and slow from my days of network management. South Carolina is not an isolated exception. Do you think that a farmer in the Midwest or someone in the Dakotas is likely to have a fast Internet connection? Even if your Web site is aimed at business users, you should keep in mind that many companies frequently under-provision their offices. Most users simply do not (or should not) need to spend much time outside of the LAN, and what bandwidth there is, is frequently filled up with inter-office communications. In other words, when you have 20 users on a single T1, the bandwidth available to each user is not much. Web sites should still be extremely lightweight, end of story.
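
      To put rough numbers on it, here is a back-of-the-envelope sketch (nominal link speeds, which are optimistic; real throughput is lower) of how long a 200KB page takes to arrive:

        # Rough download times for a 200 KB page over various links.
        # Speeds are nominal line rates, not measured throughput.
        PAGE_KB = 200

        links_kbps = {
            "56k dialup": 56,
            "T1 shared by 20 users": 1544 / 20,   # roughly 77 kbps per user
            "basic DSL": 768,
        }

        for name, kbps in links_kbps.items():
            seconds = (PAGE_KB * 8) / kbps        # kilobytes -> kilobits, divided by kbps
            print(f"{name}: about {seconds:.0f} seconds")

      At dialup speeds, that single page is roughly half a minute of staring at a blank browser window.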

      Site must be extremely usable by our target audience, and very usable by non-target audience members

      This is another pain point, on many levels. Those nifty menus that you designed may work for you, but they probably confuse a user. If they drop down automatically on a mouse hover, that is different from how desktop applications do things, and is confusing. If they need to be activated by a mouse click, you have confused your user, because a mouse click on a Web site is “supposed to” (in the user’s mind) take them to a new page. Even worse, did you break the user’s right-click functionality, such as the ability to open the link in a new tab or window? Will the top level menu item take the user someplace useful as well, or is it just an access method for the menu? Can a screen reader read the resulting menu, so that vision impaired users can use your site? Will my mother be able to figure out your menu system? Avoid those drop-down menus; they are simply a crutch for Web designers and developers who do not know how to construct a useful, usable navigation system.

      AJAX is another usability breaker. It is nearly impossible for vision impaired users who use screen readers to work with. Anything requiring drag/drop or right clicking (they are acceptable if they are not the only way of using the site) is a bad move too. In fact, anything that requires any type of input other than single clicking with the left mouse button is an utterly unusable system. You do not want to require your users to experiment beyond the lowest common denominator of input (single left mouse click) to use your site.

      Sites that look like desktop applications are another usability disaster. You are confusing your user. Even if the whole Web went to “desktop-esque” sites tomorrow, there is still the problem of everyone having a different idea of how to make their pseudo-rich clients look and function. If you want your users to give up and walk away from your Web site, try to make it look and act like a desktop application and not a Web site. When the user opens their Web browser, they expect Web sites, not applications. That means standard form widgets and links only. That means that text or graphics that look like links should function like links, including retaining the browser’s right-click functionality. That means that saving the page as an HTML file should work, and the user’s copy/paste functionality should not be impaired. It means a lot of things. Keep your Web page a Web page; if you want to add additional functionality, it should be marked as such so that the user knows that they are using an application. Microsoft made this mistake with the new MapPoint web site; the site is useless and unusable now.

      Site must present the most useful and interesting product information up front, but allow visitors to get as much product detail as they need, to simultaneously encourage sales and minimize the need for customers to contact us before making a purchase

      This one should be obvious. Many Web sites fail on this one, not due to over-engineering the site, but due to under-engineering. Product pages that have only a small thumbnail image of the product, database schemas that do not allow detailed product information or that force the information display to be so generic that products with special features do not have those features highlighted, short descriptions that are just truncated versions of the long description: these are all ways that an under-engineered Web site can fail to present the right information to visitors at the right time.

      Site must be consistent with our corporate “look and feel” in a manner that does not compromise usability

      Frequently, Web sites try to incorporate too much of the company’s non-Web branding into the site in a way which kills usability. For example, putting a Flash intro (or any kind of intro page) in front of the site is a disaster. It is also a waste of time. Chances are, your users are coming from a search engine and will bypass that page anyways. What works for print and what works on screen are two entirely different things, right down to the choice of fonts. For example, Arial is more readable than Verdana in print; but Verdana is more readable than Arial on the screen. Times New Roman is even worse on the screen, while still being a great choice in print. Splitting the main text into columns is another item that works in print and fails on the screen. Sidebars, light watermark graphics under the text, “splash graphics” all over the place; these are all things that work great in print but fail on the screen. Similarly, the way your customer presents itself in television ads is not how you should be designing your Web site. TV ads are meant to get the audience’s attention before they duck into the kitchen or bathroom (right down to having a higher sound volume, so they can be heard in another room). Your Web site does not need to be doing this; if it is on their screen, it does not need to gain their attention. They have already chosen to view it. And for the love of all that is good, do not force your users to watch your TV ads on the Web site.

      Site must be as secure as possible

      Every new feature you add to a site, if it involves sending data to the server, is a potential security hole. Too many “perpetual beta” pieces of software have security bugs. Even worse, updating third-party Web applications is usually an absolute nightmare. In a nutshell, go ahead and use some “super cool alpha release” piece of code from SourceForge. But be prepared to sit on top of that application, checking their bug tracker, looking for updates, performing security audits, and so on. Constantly. It will be like 1997, all over again! Woo hoo! Go ahead, add an RSS aggregator to your Web site; just be prepared to take the blame when malicious code is dumped to your users (or your servers) via the feed. Use that third party survey system (which no one fills out anyways) if you want; but when a poor use of sprintf() lets some malicious user take over your server, accept the blame. Install some goofy search system, I will not be offended. But I will not have any sympathy if you choose a product that kindly offers your database information to users when it bombs out, and has a SQL injection hole in it to boot. To repeat myself, every new feature you add to a site, if it involves sending data to the server, is a potential security hole.
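
      To make the injection point concrete, here is a minimal sketch, using Python and an in-memory SQLite table purely for illustration, of the difference between splicing user input into the SQL text and passing it as a parameter:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE products (id INTEGER, name TEXT)")
        conn.execute("INSERT INTO products VALUES (1, 'widget')")

        user_input = "widget' OR '1'='1"   # what a hostile visitor might type into a search box

        # Dangerous: the input becomes part of the SQL text itself.
        unsafe = conn.execute(
            "SELECT * FROM products WHERE name = '%s'" % user_input).fetchall()

        # Safer: the driver treats the input strictly as data, never as SQL.
        safe = conn.execute(
            "SELECT * FROM products WHERE name = ?", (user_input,)).fetchall()

        print(len(unsafe), "rows from the string-built query (the injection worked)")
        print(len(safe), "rows from the parameterized query")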

      Security and usability also do not play nicely with each other. It can be a fine balancing act. I am indeed sympathetic to it. But when you make your users create a strong password, but do not help them create it (the MSDN site has an excellent system for this, by the way; it has a little “security meter” that goes from red to green in real time, to indicate that you are meeting the requirements), they are quite likely to give up and go to another site to make their purchase.
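
      The real-time feedback does not have to be elaborate, either. A minimal sketch (the rules below are made up for illustration, not a recommendation for any particular site) of telling the user exactly which requirement a candidate password still fails:

        import re

        RULES = [
            (r".{8,}", "at least 8 characters"),
            (r"[A-Z]", "an uppercase letter"),
            (r"[a-z]", "a lowercase letter"),
            (r"[0-9]", "a digit"),
        ]

        def password_feedback(candidate):
            """Return the requirements the candidate password still fails."""
            return [message for pattern, message in RULES if not re.search(pattern, candidate)]

        print(password_feedback("hunter2"))
        # ['at least 8 characters', 'an uppercase letter']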

      Even things like user comments or reviews can kill a website. All it takes is a poor filtering mechanism that allows someone to put in some CSS code to make your site look bad, or that allows the system to be spammed, to make the feature worse than useless. Captchas are not the solution either; there are plenty of tools out there to break them. If your site involves trading money for a product or service, you can be sure that someone will try to rip you off.

      Site should foster a trust relationship between our company and the customers, particularly new customers

      This one is pretty simple. Avoid sleazy things (like popup ads, banner ads all over the place, the awful “Ads by Gooooooogle”, etc.), and your users will get the impression that you are a professional company and will be more likely to trust you. A site that spews out JavaScript errors all over the place does not foster trust. My first thought when I encounter a site with JavaScript errors is, “Wow, their programmers are incompetent. Will it be safe for me to let them have my credit card number?” Use your own ID systems; do not make the user give you their Social Security number or driver’s license number unless required to do so by law. In fact, do not even ask for it. And if you are required to do so, make sure that you make that clear to the user. And when you ask for it, let the user know how it will be used, and how that data will be protected. Do not store your users’ credit card numbers without asking them first (indeed, this is the law). And so on.

      Site must have search engine optimization baked in from Day 1

      This one is the ultimate killer. A five percent increase in traffic from organic search results is a five percent increase in sales. A site in the Top 10 of the major search engines will outsell a site on page ten of results by a disgusting amount. To put it bluntly, anything other than pure, static, standards-compliant HTML is going to hurt your rankings in the search engines. Sure, your super nifty Flash application may look awesome. But how is the Marketing Director going to react when he finds out that if he wants top search engine placement, he will need to spend thousands of dollars a month on pay-per-click ads? Even worse, pay-per-click ads are not nearly as effective as organic search results; an organic search result listing is about twenty times more likely to be clicked, and around 14% of all pay-per-click page views are click fraud. Do the math. Is taking that kind of hit to your search engine ranking worth it, given that kind of loss of search traffic? Are you hoping to make up for it by having other sites link to yours? If so, ask yourself this: how are those other sites going to find out about you in the first place, if you are not well positioned in the search engines? There is a reason why Google, Yahoo, MSN, and other sites earn billions of dollars in revenue from their ad systems. Search engines are that important.

      Unless you are a site like Amazon or eBay, where everyone who is looking for what you sell is going to go to your site first, you simply cannot let technological wizardry stand in the way of your search engine rankings. A site that gets zero visitors makes no money; it is a simple equation.

      What does this mean for the Web designer? It means that anything that is dynamically generated should come out as static HTML code. It means that your URLs need to be search engine friendly (which is also bookmark and email friendly, another usability item). It means using CSS responsibly, to separate presentation from content. It means properly marking up your code, particularly with the header tags, strong and emphasis, etc., so that the search engines know what each page is about. It means providing ALT text on your images (good for screen readers, too). As a shortcut, try out your site in Lynx or use a screen reader on it. If the site works great in those environments, the search engines will love it.
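
      As a sketch of two of those items, search-friendly URLs and static output, here is an illustrative snippet (the product data, template, and file layout are invented, not taken from any real project):

        import re

        def slugify(title):
            """Turn 'Blue Widget, Large' into 'blue-widget-large' for use in a URL."""
            return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

        TEMPLATE = """<html>
        <head><title>{title}</title></head>
        <body>
        <h1>{title}</h1>
        <p>{description}</p>
        <img src="{image}" alt="{title}">
        </body>
        </html>
        """

        def write_product_page(title, description, image):
            filename = slugify(title) + ".html"   # e.g. blue-widget-large.html
            with open(filename, "w") as out:
                out.write(TEMPLATE.format(title=title, description=description, image=image))
            return filename

        print(write_product_page("Blue Widget, Large", "A sturdy blue widget.", "blue-widget.jpg"))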

      It is a great thing that as your site’s usability improves, so do its search engine rankings. Search engine spiders are essentially dumb users with technologically backwards browsers and a dozen different physical handicaps. They cannot right click, they cannot run JavaScript, they cannot drag/drop, they cannot use Flash, they cannot use AJAX, they cannot follow JavaScript links, and so on. Having any part of your site accessible solely through any of these methods not only hurts your usability, but kills your search engine rankings as well.
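
      A crude way to see your site the way a spider does is to walk the raw HTML and separate the links a dumb crawler can follow from the ones it cannot. A minimal sketch (the sample markup is invented):

        from html.parser import HTMLParser

        class LinkAudit(HTMLParser):
            """Collect hrefs a dumb crawler can follow versus ones it cannot."""
            def __init__(self):
                super().__init__()
                self.crawlable, self.not_crawlable = [], []

            def handle_starttag(self, tag, attrs):
                if tag != "a":
                    return
                href = dict(attrs).get("href") or ""
                if not href or href.startswith(("javascript:", "#")):
                    self.not_crawlable.append(href)
                else:
                    self.crawlable.append(href)

        audit = LinkAudit()
        audit.feed('<a href="/products.html">Products</a> <a href="javascript:openMenu()">Menu</a>')
        print("crawlable:", audit.crawlable)          # ['/products.html']
        print("not crawlable:", audit.not_crawlable)  # ['javascript:openMenu()']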

      I designed a site about eight years ago that has been in Google’s Top 10 for its keywords (very generic terms). It is pure static HTML. When we recently redesigned the site, the customer told me one thing: “whatever you do, make sure that it does not hurt the Google rankings. I don’t care what kind of features we might not use, just use the same techniques you used last time, and I am happy.” I would say that this is a pretty strong endorsement of my techniques. At the end of the day, that customer is spending pennies on pay-per-click advertising (he likes to appear twice on the search engines!), while his competitors are spending a fortune. Users tell him that they like his site because it is to the point and easy to use. He likes the site because updates are easy to make. All of these benefits came from eschewing fancy programming for plain, static, boring HTML. And it was easier too!

      So, before you write a single line of code on your next Web site project, ask yourself: does this code address why I am working on this project? Or am I just following the current trend, or letting how I do this be dictated to me with no business purpose behind it? Otherwise you may find out that you met all of the project specifications, and have an extremely unhappy customer as a result. Remember, you are the expert on these things, not the customer. They may have some idea of what they want, but at the end of the day, you are supposed to know how to build a Web site. Letting a customer make these kinds of decisions without telling them the ramifications of those decisions is simply irresponsible, and your customer will appreciate you more if you work with them to make the best Web site for their business.

      J.Ja

      • #3157053

        How Web Developers Can Hurt Web Marketing Efforts

        by tavis ·

        In reply to How Web Developers Can Hurt Web Marketing Efforts

        Truth.

        LOL. Flash splash screens, magpie collecting from other sites, ignorance of accessibility and web standards, tolerance of JavaScript errors, simple search engine optimization… your article really rings true.

        Of course, when I said much of what you said to our Marketing department, I got taken off the website and they brought in external designers. C’est la vie!

      • #3165421

        How Web Developers Can Hurt Web Marketing Efforts

        by bork blatt ·

        In reply to How Web Developers Can Hurt Web Marketing Efforts

        Agree, agree, agree. Let me just add the following to indicate my feelings on most web site design:

        AAAAAAAAAAAAARGH!

        Good web site design is a mix of technical skill, experience, and understanding and empathising with the user. The good designer will try to understand what their target users want, what is important to them, and their state of mind when they arrive at the web site.

        Let me use Google as an example. Please don’t flame me for mentioning Google – I know strong feelings exist around them at the moment. But Google understands one thing very well – the average user arriving at a search engine wants one thing: to find other websites quickly, and leave! They don’t want to muck around in useless “portals” and have news, adverts, and celeb gossip pop up in front of them, with the actual search bar hidden in some unobtrusive location, with space for about 3 characters using a 3 point font. If you go to Google, the home page loads in seconds (milliseconds if the graphics load from a cache). Compare this with the MSN home page, which is definitely in portal mode.

        If, on the other hand, your web site is in the “celeb gossip” category, you might get away with more flashy graphics, photos, ads, etc, because your users are more likely to be in “browse” mode. If each page takes 5 minutes to load, however, you will probably lose the user.

        A final suggestion that might help keep the user in mind is to make up a fictitious user who represents the average person you want on your site. Give them a name and a personality! Give them a realistic level of tech literacy. The more realistic the better. This helps turn the vague idea of “the user”, some legendary entity out there, into a real person. It is easier to imagine what “Bob Sportsnut” the 19 year old college student thinks of your site than “the college demographic”. Marketers are particularly prone to this habit of dehumanising their target audience, and they wonder why nobody is flocking to their web site which is 90% ads that “appeal to our demographic” and 10% actual information of interest to Terence and Susan out there.

        * the characters mentioned in this entry are fictitious. Any resemblance to real persons living or dead is unintentional. All trademarks are the property of their respective owners. Batteries not included. Barbie and Ken not included. *

      • #3165356

        How Web Developers Can Hurt Web Marketing Efforts

        by crabbyabby86 ·

        In reply to How Web Developers Can Hurt Web Marketing Efforts

        Amen.

        I’ll admit it–I’m 19, and I still have the Harry Potter fansite I built when I was 13.

        Because it was successful.

        Five and a half years ago, when every teenager seemed to have a HP site, I decided I’d give it a try. I studied my HTML standards like a good girl and built myself a gloriously simple static HTML site of about six or seven content-packed pages. Before I knew it, my site was the ‘lucky’ result on a Google search of Harry Potter Rumors. Still is, as it turns out.

        I have never spent any money on advertising. At all. Nothing. I never even joined one of those overblown HP web rings. I did try a couple of free link exchanges, but that never really generated any traffic. I owe it all to unique, interesting content laid out with a very plain table.

        I’ll admit also that I was tempted many times to add fancy stuff. But it turns out that most of my audience in particular was not the most computer-savvy, and my web stats were showing an awful lot of old versions of browsers on dialup connections. Because of that, some of my simplicity was pure necessity. It’s funny how HP brought so many different kinds of people together, and I taught more than my fair share of kids how to be a good forum user, personal security and all. Guided a few aspiring web designers. Learned more about the business of web hosting than maybe I cared to know. Made a few friends and became an expert in all that is HP. I’ve had a lot of great experiences through this site of mine.

        It started, and it continues, with simple, static HTML.

      • #3145288

        How Web Developers Can Hurt Web Marketing Efforts

        by colonel panijk ·

        In reply to How Web Developers Can Hurt Web Marketing Efforts

        Nice set of articles. My pet peeve: “This site is best [or worse, only]
        viewed with the _____ browser.” Any boob who uses a page building tool
        that restricts users to only one browser deserves to take it in the
        shorts. Another: “This site requires the latest version of Shockwave/Flash/unameit.”

        A question: Is it worth going to CSS for sites? Can we depend on 99%+
        of customers using a browser that either correctly handles CSS, or at
        least degrades gracefully? There are a few nice things that can be done
        with CSS that can’t be done with plain HTML, but I don’t want my sites
        looking crappy because the IE5 or NS4.5 browser messes up all my CSS code. Any
        guidelines here? I’m happy to write pure HTML pages, and if CSS will
        make them look and act even nicer, that’s great, but I don’t want to
        end up with broken pages.

        Phil

      • #3145154

        How Web Developers Can Hurt Web Marketing Efforts

        by justin james ·

        In reply to How Web Developers Can Hurt Web Marketing Efforts

        Phil –

        My recommendation with CSS is to use static HTML with an external style sheet, and inline style attributes where needed for “one off” styling needs. I have had great success with this in the past, and I stick with CSS 1, using it mostly for basic things (no layers, for example). The beauty of this is that even ancient browsers can view the site, and since what you are using CSS for is non-critical, even a site without CSS will work. I will follow up on this in the near future with a blog explaining my “running water” philosophy of HTML, which has provided great success with zero need to change it (outside of changing the HTML to be standards compliant as the standards change) in my 10 years of working with HTML.

        J.Ja

    • #3165542

      Why I Do Not Support Short Release Cycles

      by justin james ·

      In reply to Critical Thinking

      I am going to say something that will not surprise my regular readers: I am not a fan of rapid application development, agile programming techniques, “test in production,” perpetual betas, the continuous patch cycle, and other current programming trends. I am not the only one, either. I think that this paradigm of programming is incredibly broken. I am of the opinion that patches should be rare, not because the developer should not fix bugs, but because bugs should be rare. I wholeheartedly support only a few minor version releases, and long cycles between major releases.

      I also am fully aware that “this is not how things are done.”

      The current business environment is all about “now, now, NOW!” As a result, we have become accustomed to getting software that is slow, buggy, and insecure from Day 1, and (hopefully) slowly improves through its lifetime. Our best case scenario is software that is only mildly broken, but usable, and it eventually morphs into what it should be. The worst case scenario is a system where, as it evolves and new features are added and existing features are fixed, backwards compatibility is damaged, security holes allow sensitive data to be stolen or maliciously modified, and users become frustrated and angry.

      Right now, the development cycle is predicated on the first mover principle: get a product, regardless of quality, out the door, dazzle users with the feature set, and hope they stick around long enough for the problems to be fixed. Sadly, what ends up happening is that another competitor invariably pops up, so instead of fixing the existing bugs, new features are cranked out in an effort to keep the existing user base and maintain growth. The new features are buggy too, the old features become buried, and the code gets crustier and kludgier until it is a warm pile of compost. Eventually, the only way to fix the problems is a complete tear down and rebuild.

      At the end of the day, I have a responsibility to my customers. That responsibility is to solve the why of their business process, whether it is selling products online, properly determining their sales, marking hazardous shipments in accordance with federal law, or whatever. The why is not “join the latest technical revolution” or “support a favorite software vendor” or “make decisions based upon my emotional reactions to certain things.” Usability, reliability and security must always be primary in my development process. Unusable software does not get used, or is used incorrectly, which means that the why is not being accomplished. Unreliable software loses data, wastes the users’ time, and often makes the users afraid to do anything (we have all used an application where we knew that certain operations would crash it, so we ignored those features). Insecure software can destroy my user’s business, and the resulting lawsuits can put my employers out of business as well.

      Microsoft did exactly this for over twenty years: they put the development of new features ahead of perfecting existing features and producing reliable, secure code. It actually worked for them too. They achieved market domination. The end result, though, was hundreds of millions of computers having severe vulnerabilities. Many of the features that helped Microsoft achieve business success (Office macros, IIS/ASP, ActiveX, and more) had holes in them so big you could drive a Mack truck through them. Microsoft has been the butt of endless jokes as a result. In response, Microsoft ceased development on a large number of new features to concentrate on fixing code, and has had to completely reengineer their development process to put reliability and security at the forefront of their efforts.

      What I find most interesting is that this is exactly the path every other developer seems to be going down now! The “Web 2.0” crowd especially, but OSS is headed in this direction as well. One project I keep an eye on has not put out a production release in years, as far as I am aware, but keeps cranking out “alpha” and “beta” releases. There never seems to be any actual final, production release of any version, but they run both minor and major versions through “beta” constantly. Google seems to have 75% of its products in “beta” at any given time, some of them on the market for years. What I hear in many comments and discussions is that “we need to make the best of what we have today.” No, we really do not. We need to write the best code possible today.

      Here is an example of what I mean: today, I made hotel reservations with a Hyatt hotel on their Web site. I went back to change my reservation, because I had made a minor mistake in the dates. Their reservation lookup form allows me to enter my credit card number to look up my information if I do not have my confirmation number handy. Except there is one huge flaw: it does not use HTTPS! So not only is my credit card number being sent in plain text (along with my first and last name), but it is going to be stored in my browser’s auto-fill system in plain text! This is a failure on every level. All because someone did not think about what they were doing when they wrote code, and no one gave their code a second review. All in the name of rolling out a feature to help the user. I am sorry, but I would have preferred to have to call customer support or find my confirmation number than to give out my credit card number over a plain text connection and have it stored in my browser.
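
      A check like this is cheap to put in one place. Here is a minimal sketch, written as Python WSGI middleware purely for illustration (I have no idea what that site actually runs on), of refusing to accept sensitive form data over plain HTTP:

        def require_https(app):
            """Wrap a WSGI app so any plain-HTTP request is redirected before data is accepted."""
            def middleware(environ, start_response):
                if environ.get("wsgi.url_scheme") != "https":
                    location = "https://" + environ.get("HTTP_HOST", "") + environ.get("PATH_INFO", "")
                    start_response("301 Moved Permanently", [("Location", location)])
                    return [b"Please use the HTTPS version of this page."]
                return app(environ, start_response)
            return middleware

        def reservation_lookup(environ, start_response):
            start_response("200 OK", [("Content-Type", "text/plain")])
            return [b"lookup form goes here"]

        application = require_https(reservation_lookup)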

      I find this very ironic. Just as Microsoft works to tighten down the screws as hard as they can, without breaking Windows’ backwards compatibility (and other features which help make it unreliable and insecure), Java starts growing its sandbox, developers begin to advocate putting out buggy releases and fixing them later, and the rest of the world in general transitions to releasing code quickly without proper review and testing.

      George Ou has discovered that tens of thousands of defaced web sites had something in common: most of them had a common hosting solution, and the hack appears to have occurred through sloppy ASP code. Worst of all, the suspected security hole had been known since April 2005. Let me repeat that for extra clarity. A bad programmer possibly wrote bad code that was known to be bad for about fourteen full months without being fixed, leading to a massive security breach and thousands of defaced Web sites. If I were that developer, I would be writing my will right now, buying cans of baked beans by the crate and making plans to hide in the mountains for the next century. If I were the company that employed this programmer, I would be calling John Edwards and asking him to contact the spirit of Johnnie Cochran for legal advice. It is that serious. Talk about negligence of a criminal nature! This is why I am such a hard case about these things. Do you want to be the systems administrator who did not patch the managed code environment that was known to be buggy, allowing hackers to destroy your business? Do you want to be the CIO of a company that outsourced their development to a third party vendor with no internal review of the code, allowing a bug to linger for years that caused millions of dollars to be redirected to some bank in the Bahamas? Do you want to be the programmer who did a copy/paste on some code from some website that contained a buffer overrun exploit, giving hackers root on your servers?

      Look at the why of some projects I have worked on: “selling products online, properly determining their sales, marking hazardous shipments in accordance with federal law.” Imagine what happens if I write buggy code for any of these projects. Credit card numbers could be stolen. A federal investigation could be launched into the company’s violation of SOX, or the incorrect payouts of millions of dollars in bonuses and other incentive compensation could occur. The company’s transportation network could be shut down by the federal government while they audit the HAZMAT controls. The code I write must be bug free. The lives of real people and millions of dollars are on the line.

      A short essay written by Professor Kurt Wiesenfeld entitled It Was a Rookie Error is something that I read during college, and I keep the ideas from this essay in mind whenever I write code. When programmers write bad code, people can die, companies can collapse, millions of dollars can disappear, or lives can be ruined. If this sounds overly dramatic, think again. Sure, you may only be working on a small personal Web site today. But if your small personal Web site gives root to a hacker, someone else’s Web site that may be much more sensitive could be compromised. You may just be writing a “quick and dirty” script, but “quick and dirty” code tends to linger forever and have real systems built on top of it. Your small driver for an obscure device may end up in a system for air traffic control; if it crashes the OS, people die.

      So the next time you think about releasing untested code, or code with known bugs, for the sake of getting new features in front of users, think again. “Good enough” is never good enough in the realm of real IT.

      J.Ja

      • #3165915

        Why I Do Not Support Short Release Cycles

        by jaqui ·

        In reply to Why I Do Not Support Short Release Cycles

        I am not a fan of RAD myself, but I do favor having frequent minor version releases.

        By minor version I mean the .x.y.z type of thing, where you are adding the basic functionality and want to have it tested as much as possible before getting to a major release version, such as 1.0.
        This type of release pattern will allow for the most heavily tested and debugged application, making the major release versions that much more stable, with far fewer bugs or security issues.

      • #3165884

        Why I Do Not Support Short Release Cycles

        by karlg1 ·

        In reply to Why I Do Not Support Short Release Cycles

        The process of short-release cycles is known in our company as making ‘banana-ware’ – in other words, the product is green when we ship it and it ripens at the customer’s.

        For us this works – because the software is not the primary focus – the database information we ship and the reports that you can retrieve are what people want. If the software has the occasional glitch, it will be patched ASAP or, very rarely when it lasts that long, fixed in the next release, which will be out every 3 months (this ensures we cover our legal obligation to have up-to-date safety data). The software expires every 7 months. The continual getting-ready-for-release cycle is supported by 5 testers, and they are starting to automate this.

        Best bit about the short release cycle is that you only have to support bugs for at most 7 months.

      • #3165147

        Why I Do Not Support Short Release Cycles

        by duckboxxer ·

        In reply to Why I Do Not Support Short Release Cycles

        I understand what you are saying and I personally am not a fan of short release cycles either.  Especially after working for a company looking to reach CMMI level 3.  Short release cycles do not lend themselves to lots of documentation, generally ‘just enough’, if any.  Later on, when changes need to be made, the fact that little time was put into analysis and design for solving application problems means that those solutions may not be the best later on in the development process (or in the next version).  

        Having little documentation means lots of potential issues later on.  New team members will take longer to get up to speed because there is little documentation for them to reference.  Or another team member has to sit and explain everything to them. When you are dealing with small teams to begin with, this becomes a larger issue. Think what happens if there isn’t anyone to explain design issues to the developer that ‘inherits’ an application.  Also, when it is time to make changes, if there isn’t much documentation, then developers could be wildly off in regard to their time estimates.  

        My question is though, as a developer, how does one break the cycle?  Sure I can convince managers to look at a phased approach, but based on the current project schedule, I’ve got 3 months (and luckily 2 other developers) to come out with a beta application.  I can see that they are getting a little grumpy with a long analysis/design period.  How do you tell them it is for their own good?  I’m not going to be here forever.  The next guy will need something to start from, but managers (unless they are recent developers) rarely seem to ‘get’ this.

      • #3165145

        Why I Do Not Support Short Release Cycles

        by mike page ·

        In reply to Why I Do Not Support Short Release Cycles

        I think Justin is correct in many respects.  Buggy software is regularly shipped, and we are in a constant patch cycle.  The ramifications are huge.  But I do not think that the development methodology is the cause.  You can create bad software using a short release cycle, as in agile programming, as well as with a longer release cycle, as in the waterfall development cycle.

      • #3165119

        Why I Do Not Support Short Release Cycles

        by ke4ma ·

        In reply to Why I Do Not Support Short Release Cycles

        What you are describing isn’t RAD or Agile. Neither of these two methodologies support releasing buggy software, nor do they support short release schedules. What you are describing is sloppy programming and sloppy project management, no matter what the methodology used.

        I’ve worked on all sorts of software projects that used all sorts of project management. It doesn’t matter what techniques are used, if the project is poorly run the result is crappy software. Poorly run can be everything from being underfunded to setting unattainable goals. RAD and Agile were developed to combat this, and when used according to directions, do so very well.

        Neither RAD nor Agile dictate a short release cycle. You may confuse Agile’s short development cycle with a release cycle, but they aren’t the same. No one is forced to release his software before it is ready. In fact, if you do, then you ARE NOT following Agile principles.

      • #3165113

        Why I Do Not Support Short Release Cycles

        by fbuchan ·

        In reply to Why I Do Not Support Short Release Cycles

        The problem is real, but the solution is not going to come from the frontline developers, who rarely have the authority to stall the demands of their masters even when they have the skills. The problem is that the expectations of businesses (and managers) exceed the real possibilities of clean development on a compressed timeline. Any time a business is managed by the quarterly balance sheet, the problem will be that anything more than a quarter-long project is impossible for the managers to wrap their minds around. Regardless of all good intentions, no developer can expect to change that management style, and most software development is not going to respond effectively to the demand of ROI by the quarter.
         
        As for methodology being the culprit? Any methodology can be abused, and blaming RAD methodologies for the problems of buggy software (or blaming short cycle releases) is nonsensical. I have seen long-form methods that generate thousands of pages of inscrutable documentation fail as certainly, and they usually fail for reasons that are harder to qualify because the documentation is generally bad. Good developers exceed the limitations of their methodology; bad ones degrade the value of their methodology — no specific methodology is better or worse. Documentation only has value when it is focused, correct for the project, and communicative — most I have seen is not.
         
        Software development used to be a creative enterprise, where those doing it had real passion for the work. Having been kicking around 20+ years, and actually making development my primary source of income that entire time, I have watched the field reduced by an influx of people who look at it as a money-making opportunity first and foremost. When they find out how much actual work it is to do it well, many of them simply enter the “get-by” mode of operation. This attitude is more the cause of the degradation of software quality than anything else, because without passion the process of creativity behind the development of software becomes mechanical. Problem-solving is rarely mechanical, and never effectively so when timelines are as compressed as real developers face daily. But blaming this lack of passion on the methods of choice is arsy-versy: the problem is that the weakest developers are choosing methodologies that demand the least of them, but they would do as poorly in tougher frameworks.
         
        The development community doesn’t need more methodologies, should stop pursuing the pipe-dream that it is an “engineering” undertaking, and should focus on its actual value-proposition. With good, passionate developers, software provides quality tools, regardless of cycle, method, or timelines. Craftsmanship — almost unknown to the modern world — counts in the field.

      • #3165096

        Why I Do Not Support Short Release Cycles

        by snoopdoug ·

        In reply to Why I Do Not Support Short Release Cycles

        All code has bugs. The dilemma is how do we know whether we’ve found the crucial bugs. In non-trivial code you have too many code paths to exhaustively test each path, so you use beta releases to verify whether you have fixed all non-crucial bugs.

        All development teams, even Microsoft’s, have limited resources. It is not “code with known bugs for the sake of getting new features in front of users”, it’s “which bugs are serious enough to not put new features in front of users”.

        As to “rapid application development, agile programming techniques, ‘test in production,’ perpetual betas, the continuous patch cycle, and other current programming trends”, in which methodology is the mantra “ship really bad code quickly”?

        Regardless of how you develop code, all code contains bugs. The idea behind many of the new techniques is to get a working version of the product out in front of users quickly so you can discover whether or not you are building a solution that works for them. That, and building the right features into the product, rather than features they never use. That, and finding bugs and fixing them as early as possible. What’s wrong with that?

        Which development model did Hyatt use? How do you know whether it was agile or waterfall or “throw it over the wall to an outside consulting company and pray”? What the h*ll does http/https have to do with the development model? I see no connection. How did this simple mistake get by code reviews? Oh wait, I’ll bet there were no code reviews. How is that the fault of developers? It is the fault of development management. Most of the mistakes you blame on developers are the result of pressure from above to “just do it”. Management is who won’t let us attend seminars on creating secure Web sites. Management is who wants us to ship buggy code. Management is who insists on getting out crappy beta after crappy beta.

        All I see is guilt by association–current code must have been written in a current development model.

        Here’s my take: upper management did not allow the product to “bake”. As problems were discovered, developers were instructed to “code around them” rather than fix the underlying cause. I’ll bet they were instructed to base their Web software on Windows/ASP from the get-go, not from any cost/benefit analysis.

        You are barking up the wrong tree. Developers are not hot to ship buggy code. That decision is not theirs.

        doug in Seattle

      • #3165087

        Why I Do Not Support Short Release Cycles

        by hemal_sanghvi ·

        In reply to Why I Do Not Support Short Release Cycles

        There are two aspects intertwined here. One is the quality of the code itself. As one post indicated, the discussion seems to be about sloppy programming. The other aspect of rapid releases is “quick to market” products. By releasing rapidly, one can incrementally add functionality to a product, make it available to the customer and start seeking very valuable customer feedback to further enhance the product. This is very important for customer facing applications. Some organizations have a tendency to create a huge release and jam in all the functionality that they can think of. The fact remains that most of that functionality is hardly used by their customers.

      • #3165490

        Why I Do Not Support Short Release Cycles

        by stephen.lee ·

        In reply to Why I Do Not Support Short Release Cycles

        Excellent posting, Justin.

        I haven’t seriously drilled into the Agile Programming methodology yet, so I’m not in a position to make a serious comment about it.  In general though, if a methodology, programming technique, whatever, raises even the suggestion of “short cuts” to coding, questions must be asked. If it can be used as a crutch by lazy coders to legitimise their lack of good practice under the banner of “It’s so hot right now”, questions must be asked.  If un-tested code is going to find its way out into userland and be viewed as even remotely “production-ready”, questions must be asked.

        For my money, coding has always been about one key concept: Quality.  I think Justin describes it as “craftsmanship”.  The professional coder is always aware that, no matter who s/he happens to be working for at the time, their code is inextricably linked to their name as a programmer.  If, over the space of say 2 years, the code constantly has to be patched for serious flaws that could have been avoided with some proper review and testing, you can bet your life those with the job of maintaining that code will be asking “well who wrote this in the first place?” On the other hand, if over that same 2 year period their code has performed admirably and withstood even unexpected challenges, those same people are going to be wanting him/her back again to help with the next major release, and will no doubt be willing to pay accordingly.

        Amongst other things, a good methodology enforces documentation, rigour and order on a project, and these are obviously good things.  Coders, however, should never overlook their personal responsibility to develop code that, to the best of their knowledge, fits the bill.  This principle will never change, no matter which methodology is “so hot right now”.

        Stephen

      • #3144885

        Why I Do Not Support Short Release Cycles

        by duckboxxer ·

        In reply to Why I Do Not Support Short Release Cycles

        I agree, FBuchan. The reason I only have 3 months to get out a beta application is the end of the fiscal year.  That is the driving factor.  This will be a large application, and honestly I think we need a little bit longer, but that aspect of the schedule is not up to me, the developer, the team lead.  

      • #3141453

        Why I Do Not Support Short Release Cycles

        by gardoglee ·

        In reply to Why I Do Not Support Short Release Cycles

        As a few others have mentioned, the problem is not the methodology used for building the code, it is the objectives of the managers directing the development effort.  If the people at the top perceive that they will be at serious risk if a bad version gets out the door, then the emphasis will be on bug-free products.  The current evidence is the turning of Microsoft.  It remains to be seen whether anyone else in the industry will feel similarly threatened by a market perception of bad quality product.  When the marketing department or the internal user decides that new features are more important than quality, the pressure on the development team will be to produce to timelines, not to quality measures.  Organizational controls are all based on a simple idea.  Do what we want and we will reward you, do what we don’t want and we will reward your successor.  The technical term for a developer, tester, manager, architect or business analyst who bucks the system is ‘unemployed’.

        Waterfall, RAD, or Whatchamajigit Development Paradigm can all produce good code or bad code, and usually a combination of the two.  A development team which schedules for design and code reviews, walkthroughs, test plan development and execution, configuration management, release management and all of the other useful quality processes, and then is overridden by project management to discard the ‘unnecessary’ parts in order to meet deadlines, reduce the development budget, fit into the cost model, and make up for other project delays, cannot help but get poor results.  It doesn’t do developers any good to have a development methodology which is routinely overridden or ignored outright, but far too many companies have a wonderfully robust methodology for display purposes only.

        The idea behind too many methodologies, certifications, CMM guidelines and other gimmicks is that if you have some sort of process written down, then you cannot blame anyone for the quality of the end product.  Processes and procedures are worthless if they are ignored in the real world of deadlines and budgets, or if they are blindly applied without thought.  Which methodology you choose to abuse is inconsequential.

    • #3166332

      Will Skype Get You Fired?

      by justin james ·

      In reply to Critical Thinking

      I am very grateful that I am no longer at my last, super corporate job, and that at my current job, I run the IT systems. Why? Because Skype probably would have gotten me fired from my last job. Skype, like every other major instant messaging client, has a conversation history feature. Unlike the others, Skype has a synchronization system. For example, if I am at work and talk to someone via Skype, and then get home and they are logged on from that computer, it will dump the previous chat session’s contents to my home PC.

      This is all well and good. Rather obnoxious (when the other person signs on, you suddenly get dumped an entire conversation all at once, and it looks like a new message), but harmless. Or so I thought.

      Just a few minutes ago, someone I talk to frequently via Skype signed on, using the same PC he was using last night when I talked to him. The contents of last night’s conversation (when we were both at home) got dumped to my screen. Some of the language was not “work safe” and some of the contents were not “work safe.” Certainly, much of the conversation was something that, even if it did not get me fired, would certainly have me blush if my boss ever confronted me with it.

      We do not do any kind of Web/Internet monitoring at my current company. If we did, I would be the one doing it anyway. My previous employer was rather different. All IMs went through a proxy which performed logging, and each manager had full access to the logs of their direct reports. In other words, had this happened at my previous employer, my manager would have been able to read the full contents of that conversation. Even worse, it would have appeared to occur during working hours, as the proxy would have timestamped it.

      So, if you work for an employer that might be (or definitely is) performing any type of Internet monitoring, I highly recommend that you either avoid Skype entirely as an IM client and use it strictly for its VoIP features, or disable the conversation logging functionality to prevent your “non work safe” conversations from outside the office from leaking into the office. This is yet another example of a new feature whose creators did not consider the implications or consequences before they implemented it.

      J.Ja

    • #3165527

      Why Does IT Get So Little Support

      by justin james ·

      In reply to Critical Thinking

      Some time ago, Joel Spolsky ("Joel on Software") wrote a good little piece entitled "The Development Abstraction Layer." As always, Joel was right on the money. The last few weeks, I have been trying to arrange training for Cognos ReportNet, at the request of one of our clients. Arranging the training has been an amazing waste of time. I spent hours on the phone with Cognos, hours searching and filtering and refining course searches, hours trying to make travel arrangements; in short, the kind of things that Joel says should be abstracted out of the development process.

      It is not that I feel that this level of work was demeaning. If anything, it was refreshing to not have my head buried in a compiler or an Excel spreadsheet all day long. But all things considered, an administrative assistant could have handled the training arrangements just as competently as I did (maybe even better) at a significantly reduced cost. This is not the first time that this has happened at the company I work for. My current employer is a five person company, and we all have our separate responsibilities and areas of specialization. Whenever some non-core competency item comes up (accounts payable, accounts receivable, travel arrangements, going to the Post Office, etc.) we have to take time away from a project and cease incurring billable hours to do work that an administrative assistant could do just as well if not better.

      Even at large companies, this is standard. At every job I have ever had, administrative assistants and other support staff simply did not exist for the IT staff. It seems like every other department had support, except for IT. Ironically, IT professionals (even after the dot-com crash and the offshoring craze) make more than Marketing, Public Relations, Accounting, and all of the other departments. Yet it is still standard to have IT professionals waste their time doing things that have nothing to do with their jobs. I am simply baffled by the attitude that says spending the time of someone who earns $70,000 per year on trying to order stationery instead of developing software is acceptable. The modern enterprise is filled with an extraordinary number of layers. Simply ordering pens can take an hour. Travel arrangements take days to set up through the corporate travel department. Purchasing something from an unapproved vendor, or software not on the corporate installation list, is nightmarish at best (and IT professionals typically need a lot of non-standard software and hardware). Yet, IT professionals are asked to spend their time doing this. At some large companies I have worked for, these kinds of tasks have frequently consumed 5% of my work time on average over the span of my employment. A 5% reduction in productivity, multiplied by the cost of an IT department, is a huge hole in the budget. But this is considered acceptable.
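
      The arithmetic behind that hole is easy to check. A back-of-the-envelope sketch (the salary, overhead percentage, and head count below are illustrative assumptions, not figures from any particular employer):

        SALARY = 70000          # fully loaded cost would be higher still
        OVERHEAD_SHARE = 0.05   # 5% of work time spent on administrative chores
        HEADCOUNT = 20          # a modest IT department

        per_person = SALARY * OVERHEAD_SHARE
        department = per_person * HEADCOUNT
        print(f"${per_person:,.0f} per person per year, ${department:,.0f} for the department")
        # $3,500 per person per year, $70,000 for the department

      In other words, under those assumptions a 20 person IT department is effectively paying one extra $70,000 salary for work a $10 per hour assistant could do.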

      I do not consider this acceptable. I consider it bad business. If you could buy a product from one store for $50 and at a different store for $20, you would get it from the cheaper store, right? So why are companies forcing high priced IT professionals to do the same work that a high school graduate could be doing at $10 per hour? Is it simply prejudice against IT? The mistaken belief that IT professionals do not do anything that is not their core mission? Or something else?

      I would love to hear your feedback about this problem.

      J.Ja

      • #3165500

        Why Does IT Get So Little Support

        by rickhartusa ·

        In reply to Why Does IT Get So Little Support

        This is old news… it has been for the 25-plus years I have been in IT; even more so since the dot-com and telecom bust of 2000-01. Upper management and pencil pushers cut all support staff positions to the bone in their first attempts at preserving profits, “the numbers”, Wall Street estimates, and the bonuses that go along with those expectations. Bad business or not, trust me, your current management team does not care how much time away from productive work you take for administrative duties as long as both are completed.

        The best way you can stay enthused and invigorated is to keep yourself current on technology and trends, keep yourself trained in new languages and services, network yourself through local user groups, professional organizations, and volunteer groups, and always keep an updated resume and reference list handy and current. As your current employer is piling on the administrative as well as the professional IT responsibilities, one of two possibilities exists – you burn out or you blow up – either way you end up leaving. Unless they suddenly see the light (yuk, yuk), good luck in your search :-)

        Best wishes,
        Rick

      • #3164330

        Why Does IT Get So Little Support

        by mark miller ·

        In reply to Why Does IT Get So Little Support

        I haven’t run into the bureaucratic stuff as you have, but I have seen wasteful stuff done inside software projects. One place where I worked, we wrote everything ourselves, right down to the low-level infrastructure. We could’ve used the built-in replication services of our RDBMS, but they had me take time to write a custom but simple data replication mechanism. We could’ve used ODBC to handle database functions for our Windows client, but instead we spent many hours writing our own database wrapper. We had a few projects where doing joins in our database access would have been more code-efficient, but we couldn’t because our in-house wrapper didn’t support it, and we used it for every project. We discussed changing the wrapper to support joins, but we discovered we’d have to basically rewrite it to put in the support. Not wanting to go that route, we just slogged out the code, writing what were essentially logical joins in the code for each application.

        The positive side of this was it gave us control of some of our infrastructure. We didn’t have to try and shoehorn a prepackaged solution into a platform that couldn’t handle it. We could customize it to the platform. Part of our product was a handheld wireless tablet or data collection device, which had limited CPU, memory, and data storage capabilities.

        The downside, IMO, was that they spent tens of thousands of dollars for us to custom-write capabilities that we could either have bought for anywhere from a few dollars to several hundred dollars, royalty-free, or that were already included in the software we were deploying on. What my higher-ups objected to was that the included capabilities were either not performant enough for their taste, or were wasteful because they included a lot of bells and whistles we didn’t use. In economic terms, I don’t think they were necessarily making the best choices. Hardware is cheaper than developers. If a solution is wasteful of hardware but cheaper in developer time, prudence says go with the cheaper solution rather than creating something that is technically elegant but more expensive.

        I’m not saying writing custom infrastructure is never a good choice, but I think we were a kind of johnny-one-note. No matter what the problem was, my bosses always had a preference for us to write it ourselves rather than buy components, or use what was built in, to do the job for us.

      • #3144903

        Why Does IT Get So Little Support

        by duckboxxer ·

        In reply to Why Does IT Get So Little Support

        I completely see where you are coming from on this issue. For years, IT has been the red-headed stepchild of the business world. But the funny thing is, no one realizes how critical we are to a company’s survival. If a server goes down, we don’t get to say, “oh, I’ll work on that tomorrow, I’ve got a dinner date.” But yes, we do all of our own ‘mundane’ tasks that other departments have junior team members or assistants designated for. Unfortunately, we’ve been this way for a long time, and the stereotypical IT person doesn’t rise up against a business’s structure.

        This is why I am terribly big on interns. IT departments don’t get assistants. Generally, interns are there to learn some technical skill, but they are also cheap enough that other tasks can be ‘dumped off’ onto their plate. I have seen it be easier to get an intern hired under the guise of training new technical help when, in reality, they are there to take care of the simpler tasks and leave the IT workers free for the more billable ones.

    • #3144556

      Usability: The Feature To Rule Them All

      by justin james ·

      In reply to Critical Thinking

      I have been re-reading some of my blog posts, and I noticed just how often I mention usability. I consider usability to be the most important feature a piece of software (or hardware, for that matter) can have, right up there with reliability and security. Anything else is less important. The case for reliability and security is well established. Usability has been proven to be a vital feature as well, but for whatever reason, programmers do not seem to care about it much. And when they do, they frequently fail to get it right.

      Why is usability so important? Cell phones are a great example. A huge portion (40% is a number I read recently) of cell phone users do not use any feature of their phone except basic voice calls. Each user who only makes basic calls is a lost revenue opportunity for the cell phone provider. These users, by and large, are not using the advanced features because they cannot figure out how to use them. Indeed, cell phones are so hard to use that around 25% of cell phone users reportedly do not even use the address book, a feature that has been available for over a decade. That is ten years to get a feature right, and it is still not usable by a whopping quarter of users.

      If I opened a store, and 25% of my customers could not figure out how to open the door, I would be out of business. If I opened a hamburger stand, and 40% of my customers found ordering fries or a drink with their basic hamburger to be more work than it was worth, I would be out of business.

      Software is just as bad. Microsoft Office is the extreme example of this. Most of its innovative or high powered features are buried in menus, non-standard toolbars, or otherwise hidden from the user. A user could be doing something manually for years before discovering that Office had a feature to do the exact same thing already built in.

      A significant portion of computer users will not experiment above and beyond a single click to accomplish their goals. These users will not drag/drop, will not right click, will not double click, and certainly will never learn a keyboard shortcut. These are the users we see helplessly scanning the menus to perform even the most basic tasks. As a result, some programmers choose to dumb down their software to the point where it is nearly impossible to do anything faster than the speed of the menu system.

      At the other end of the spectrum, there is software that is simply impossible to use efficiently, no matter how long you have been using it. Sadly, Microsoft Office also falls under this category. Although it is feature packed, there are few keyboard shortcuts to assist in the work. Even worse, using its advanced features is cumbersome at best. The auto formatting and other things it does to help the newcomer hold the advanced user back, forcing the document into undesired formats.

      Usability is indeed a fine line: expose too much advanced functionality up front, or force users to work in uncomfortable ways, and they develop a fear of the product; make the software too easy to use, and you hold people back from using it to the fullest. But usability needs to be a top concern for anyone making any kind of interface, whether hardware or software, or even something as mundane as a common phone. Without usability, people will not use your product to the fullest, they will get little value from it, and they will not buy it. Poor usability nearly always leads to poor market results (there are a few exceptions, like MySpace). Great usability almost always leads to better market performance (the Macintosh is the exception to this rule). Google’s stunning rise to the top of the Web search market probably would have happened even if they were delivering the exact same results that MSN or Yahoo! delivered, just because Google’s interface is superior to their competitors’ interfaces in every measurable way (note that Google employs zero JavaScript for its core search engine, let alone Flash, AJAX, spinning banners, and dancing bears). You can ignore usability, but do so at your own risk.

      J.Ja

      • #3144890

        Usability: The Feature To Rule Them All

        by fbuchan ·

        In reply to Usability: The Feature To Rule Them All

        100% agreed about the importance of usability, but I am uncertain there is much that really helps the user group who simply can’t use even the simplest software, which is the majority out there. Part of the problem is that we pretend (even within the industry) that software is as simple as a toaster, and so people assume they need virtually no insight or training to use it. Until we start to look at software the way expert tools are viewed, we will never break past the barrier raised when users simply believe no training is required.

      • #3144888

        Usability: The Feature To Rule Them All

        by duckboxxer ·

        In reply to Usability: The Feature To Rule Them All

        I think part of this is the nature of the programming industry itself. You have a lot of development shops that just employ programmers. They write code; they generally don’t make pretty graphics, and QA testing doesn’t really exist. It works, and if it doesn’t make sense to you, well, you just need to think like a programmer (because it isn’t as if there’s a lot of documentation). Programmers generally like to do the new, cool thing in their applications; that is how you keep them entertained and keep them from going nuts (stereotypically). Also, education-wise, graphic design, usability, and QA courses don’t exist or are optional for those of us who went through the formal education track.

        To fix this? I believe that we have to change the business requirements. People have to demand better software and be willing to wait for it. The ‘I need it yesterday’ mentality has to change, so that users may still want it yesterday but are patient enough to wait for easier-to-use software. From there, the education system has to change to require these kinds of courses or offer affordable usability training. Businesses will also need to be willing to hire QA staff and designers/usability experts.

      • #3141539

        Usability: The Feature To Rule Them All

        by rmrenneboog ·

        In reply to Usability: The Feature To Rule Them All

        Justin, your comments and observations about usability extend well beyond the scope of software and web design. When I read your post, it struck me how well those same observations apply to many other areas. As the coordinator of a project addressing accessibility issues for persons with learning disabilities, I can say that your article, with few changes of terminology, could equally apply to business operations, education systems, and services in general. It can describe not just a typical software user, but someone working on an auto parts factory production floor, or a student sitting quietly in a classroom. I hope you will not be offended if I use some of your ideas, and some of your words, in an article for the “LD Edge Newsletter”. Please feel free to look into our project at http://www.ntl-london.on.ca, where you will also find all previous issues of that newsletter as well as any contact information you may need. Thank you.

      • #3155501

        Usability: The Feature To Rule Them All

        by flash00 ·

        In reply to Usability: The Feature To Rule Them All

        I disagree that users who can’t find the features hidden in software are just being stupid and lazy. Take key combinations, for instance. It seems like every coder puts them in but fails to document them, so the user has no idea they’re there and can’t find a list of them to choose from. Most people don’t have the time to try every possible keystroke combination to see if it does something, and even if they did, how could they know that what a particular key combination seemed to do is what it really does? Then, of course, they’d have to remember the ones that do something. See what I mean?

        I once read that the first thing that should be written for any software project is the operator’s manual.  I’d say that was good advice.

      • #3270680

        Usability: The Feature To Rule Them All

        by n.vanrijnberk ·

        In reply to Usability: The Feature To Rule Them All

        Just some thoughts:

        The key success factor of an application is its users.

        Users want to see the software work for them. But users also come with different levels of computer experience and skill, and different ways of learning.

        How do you manage those skills and ways of learning?

        For some users a manual will help. Remember, a manual is a “user interface” too. My experience is that the bulk of our users don’t read it (RTFM). A manual is one way, but a lot of people need different ways to learn and to remember.

        A fool with a tool is still a fool. My reply: so educate the fool.

        Nando van Rijnberk

    • #3145540

      Does Cross Platform Programming Matter?

      by justin james ·

      In reply to Critical Thinking

      Recently, there was a very interesting and heated discussion on TechRepublic concerning what the best programming language for cross platform development is. I weighed in, of course (my vote went to Perl and standard C/C++). Afterwards, I began to ask myself, “does cross platform development even matter?” The answer to this is “maybe,” and “yes.”

      Where cross platform development might not matter much is in the desktop application space. Most large enterprises have standardized on the Windows platform. If you are a Windows application developer, you can sell your product to over 90% of the desktop users out there. As frequent readers know, I am not fond of the idea of locking yourself out of a market. Obviously, if you are writing a Windows system utility, UNIX and Macintosh users are not important to your strategy. But if you are looking to write a general purpose application, you need to ask yourself whether losing 5% – 10% of potential sales outweighs the effort of writing your program in a cross platform manner. Even within the desktop space, there is a distinction between pure desktop applications, where the data is stored in files, and client/server applications that require some sort of communication between the desktop application and a backend server.

      Cross platform development definitely matters in the server application space, though. Windows does not command the huge market share in the data center that it does on the desktop. It is very difficult indeed to justify only being able to sell your product to 40% – 60% of the potential market. This is one reason why I am not a fan of using either .Net or Java on the backend. .Net, unless you restrict yourself to the portion of the .Net Framework that Mono supports, is hardly cross platform, and most of .Net’s advantages stem from its tight integration with Windows. I still do not trust Java’s cross platform portability; I have simply had too many problems with it in the past. Many of the portability problems I have had with Java stem not just from the JVM, but from the application server itself. Targeting a particular application server cuts you off from potential sales even more than targeting a particular operating system, and using only standard Java reduces its usefulness on the backend significantly.

      Web applications require still more cross platform considerations. Internet Explorer on Windows no longer holds the 90% of the market like it used to. A few years ago, you could target a Web site towards IE on Windows, and allow for it to gracefully degrade for UNIX, Apple, Netscape/Mozilla, Safari, Opera, and so on. This is no longer the case. The front end needs to work just as well over various browsers running on a multitude of operating systems. And the back end is subject to the same OS considerations as other server applications, with even more problems: you cannot rely upon a particular Web server being installed (let alone a particular application server). Even worse is depending upon specific backend technologies such as Windows Indexing Service or a particular vendor’s RDBMS.

      Taking all of this into consideration, we have the following grid of application varieties:

      Application Type               Dominant Environment    Market Share of Dominant Environment
      Desktop Applications           Microsoft Windows       90%+
      Server Applications            UNIX                    ~50%
      Web Applications (front end)   Windows + IE 6          75%
      Web Applications (back end)    UNIX + Apache           60%

      Looking at the grid, we can see that there are some tough choices to be made. Even when writing a desktop application, if that application requires communications with a server, the server backend will probably need to be written in a cross platform manner, even if the client portion is strictly Windows only. With Web applications, the choice becomes even more tricky: you have to account for a wide variety of Web browsers on the front end, while simultaneously writing the backend components to be used in many different environments.

      It is also very difficult to weigh the headaches and hassles of writing cross platform code against those of non-portable code. With Web applications, if you follow my “running water Web design” theory (to be discussed in an upcoming blog), your only cross platform concerns occur on the backend, which reduces the number of platform combinations that you need to support. If you are writing a desktop application, your choices are really limited to the following development strategies:

      * Java
      * A cross platform language such as C/C++, compiled into libraries and wrapped in a platform-specific front end such as the Windows API, X, .Net, etc.
      * A cross platform GUI toolkit such as Tk (a minimal sketch of this approach follows the list)
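
      To make that last option concrete, here is a minimal sketch using Perl/Tk (assuming the Tk module from CPAN is installed); the same script should run unchanged on Windows, UNIX, and Mac OS X, with no platform-specific code:

        #!/usr/bin/perl
        # Minimal Perl/Tk window; illustrative only, not a recommendation of Tk
        # over the other toolkits.
        use strict;
        use warnings;
        use Tk;

        my $mw = MainWindow->new;        # the top-level window
        $mw->title('Cross platform demo');
        $mw->Label(-text => 'The same GUI code on every platform')
           ->pack(-padx => 10, -pady => 10);
        $mw->Button(-text => 'Quit', -command => sub { $mw->destroy })
           ->pack(-pady => 5);
        MainLoop;                        # hand control to the Tk event loop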

      On the backend, you need to decide how much advantage leveraging server specific functions can give you as opposed to writing generic code in Java, C# (using Mono), C/C++, Perl, or some other highly portable language.

      Unless you are willing to sacrifice potential sales, you are really forced to write your code in a cross platform manner. As the market shows, most desktop application developers are willing to give up about 10% of their sales in order to target Windows systems; only the truly large projects (or popular OSS projects) get aimed at multiple platforms on the desktop. The backend story is much murkier. If you have a high profit/low volume sales model, targeting a particular OS or combination of OS, Web server, and application server is viable. But you cannot ignore cross platform code if you are shooting for a low profit/high volume sales strategy, particularly in a commodity market.

      J.Ja

      • #3164565

        Does Cross Platform Programming Matter?

        by mark miller ·

        In reply to Does Cross Platform Programming Matter?

        Maybe your title should’ve been different? It sounds like you answered your own question, unless you’re shooting for a high profit/low volume sales strategy, like you said. You’re obviously thinking of a mass market product, I assume for business customers. Since you’re basically taking .Net and Java out of consideration as cross-platform back end languages, what about PHP or Ruby on Rails? Since they’re script languages I imagine they are cross platform, and they run on Apache, which runs on Windows, Unix, and Linux. IMO, C++ and Perl are old solutions; they’re what many web shops used during the 90s. I’ve done web development on .Net, and one thing I can say is “thank goodness for garbage collection!” I can’t imagine having to manage memory in that kind of environment with the kind of schedules I’ve had. I really need to focus on the business problem, getting the interface to show up right, etc. One thing that’s nice with PHP is that there are some IDE environments for it. I’d imagine that the development environments for standard C++ and Perl on Unix are primitive and command-line based. Is there even a debugger for Perl? Am I off base in even asking that?

        As for the database, there are lots of choices: Oracle, Postgres, MySQL

        As far as the web client is concerned, use HTML/CSS. As far as anything more sophisticated than that, use Javascript and test it for both IE and Firefox, or use Flash. It seems like the decision for what to use on the client is much easier than what to use on the server.

        I can really only make arguments on technical considerations. As far as the market arguments I’m totally clueless. I’ll leave that for others to answer.

      • #3271123

        Does Cross Platform Programming Matter?

        by justin james ·

        In reply to Does Cross Platform Programming Matter?

        Mark –

        I am currently working on a follow up piece regarding the technical merits of various cross platform programming techniques and languages. 🙂

        You are right though, C/C++ on the backend is extremely not fun, and Perl is no joy either. There are indeed IDEs and GUI debuggers for Perl, but my experience has been that they are not nearly as full featured as Visual Studio or the various Java IDEs. But that is the trade off: great performance, or easy development. Until someone makes a “fill in the blanks” C/C++ IDE that writes all of the memory management and other assorted uncool tasks for you, C/C++ will not be fun or easy. And even then, would you trust that automagically generated code?
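
        (For what it is worth, the stock perl binary also ships with a built-in command line debugger; running "perl -d script.pl" drops you into it, but it is a far cry from stepping through code in Visual Studio.)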

        In any event, in a day or two you should see that next article.

        Thanks for the feedback, as always!

        J.Ja

      • #3155405

        Does Cross Platform Programming Matter?

        by victor ·

        In reply to Does Cross Platform Programming Matter?

        There are so many options for writing cross platform code these days that it simply doesn’t make much sense not to. Still, people don’t do it… Why? Well, it’s the same issue that came up back in ’95 with German developers. In those days, Germany had a great social security system in place, so when the software world moved from DOS development to Windows, many old timers decided to simply retire. The same thing is happening these days with web applications: some people just don’t seem able to understand how they work and refuse to learn.

        The same happens with cross platform development. Although Eclipse is only slightly less productive than VS .NET, and a good PHP developer is only slightly less productive than an ASP .NET one, people still prefer to write for Windows only and lose market share. Some people simply cannot grasp that ASP .NET/MS SQL Server can easily run to more than 8,000 USD in licenses for one server alone, whereas a PHP/MySQL solution is only about 100 USD in server setup… Not to mention that I have not noticed ASP .NET overall development costs being significantly lower than PHP. That is in practice; let’s not talk theory.

        I totally agree with you that serious solutions cannot go without C/C++ in critical places. Game developers optimize assembly code just to draw a few triangles faster, yet business solution developers cannot be bothered to optimize code that moves around incredible amounts of money. What causes that? Well… stupidity, commoditization way beyond natural limits, and a lack of skills. That is what has hurt our industry lately.

      • #3155368

        Does Cross Platform Programming Matter?

        by mark miller ·

        In reply to Does Cross Platform Programming Matter?

        To Victor:

        Well, you can count me as one of those people who is resistant. I’m fairly open minded, but I hate going backwards in developer productivity just for the sake of trying to stay on the bleeding edge. What I’ve learned as I’ve read more is that in order to stay on the bleeding edge, you have to constantly be in a position of writing a lot of plumbing code yourself, and dealing with tools that are spartan in one way or another.

        I’ve done cross platform development before, and it’s as Justin describes it. It hasn’t been that fun. You end up having to conform to the lowest common denominator that will work on all of your target platforms, and you can’t take advantage of more advanced libraries, which help you out a lot, because they tend to be platform specific. It’s like you know that tools that will make you more productive exist, but you just can’t use them. It’s frustrating after a while. I chose to go platform-specific for the benefits of productivity. Like I said, I’ve heard that IDEs exist for PHP. They may be worth checking out. Personally I’d like to see a dev. tool that integrates web UI development with a code editor, plus code debugging. That’s the reason I like VS.

        I came from coding in C/C++, working on natively compiled transaction server software on Unix, with only standard libraries for support, to working on GUI apps. on Windows, to writing web apps. in .Net. I think I can adjust, but don’t ask me to go back to command line-based tools, and clunky tools to do the majority of my work. Been there, done that. I’m not going back!

      • #3270074

        Does Cross Platform Programming Matter?

        by john.madden ·

        In reply to Does Cross Platform Programming Matter?

        I agree with you that C/C++ written into libraries might be the way to go. Sure, it’s a longer development cycle, but in the end the code tends to be faster and more efficient. Used with STL, the C variants port well and are not difficult to write in (I like the STL data structures better than those built into the .NET languages and Java) and, more importantly, transcend programming trends. Additionally, the newer C++ standards coming out should add garbage collection as an option to the language support. 

      • #3154735

        Does Cross Platform Programming Matter?

        by jaqui ·

        In reply to Does Cross Platform Programming Matter?

        Just a couple of points to make on people’s comments.

        1. Flash: not always available, so not a good cross platform web interface.
        2. Javascript: a security nightmare that should be disabled in all browsers by now, unless the end user is a complete moron and is ignoring what security professionals have been saying for two years.
        3. .Net/Mono: not supported on most non-Windows machines.
        4. Java: the worst option for cross platform development.

        Linux does have full GUI IDEs for all supported languages, but it does not have, nor does there seem to be any attempt to create, a code generator that churns out bloated code the way MS VS does.

        I personally will use C, C++, Objective C, Perl, PHP, or Python for cross platform development.

        Dynamic content websites will always output W3C standards, which at this time means XHTML and CSS.
        I will never use any client-side scripting such as Flash or Javascript.

        For a workstation application, the options for cross platform GUI work in C and C++ go far beyond Tk: add wxWidgets, Qt, and GTK/GDK to the list.
        Eclipse is out of the running; it is Java based.

    • #3154819

      Some Cross Platform Development Suggestions

      by justin james ·

      In reply to Critical Thinking

      In my most recent blog, I took a look at whether or not you may need to consider writing your software in a cross platform portable fashion. In this blog, I will examine the various options for writing cross platform portable code. There are two ends to writing cross platform code: the interface, and the processing portion of the code. In a pure desktop application, they are contained within the same body of code. The front end is almost always some sort of GUI, or occasionally a command line. A server application tends to take input from a network socket and produce output to another network socket, a database, a file, or a combination of the three. In a client/server situation (including dynamic Web sites), the front end communicates with a backend server application (ultimately via a network socket, though sometimes not directly), which in turn produces output that arrives back over a network socket (again, not always directly).

      No matter what type of application you are writing, you only have a few options in terms of how to go about writing cross platform code. Your first choice is to separate your presentation code from your logic, write the presentation in platform-specific code, and reference a library that you wrote to be portable. Your other choice is to use a language and interface toolkit that is portable.

      Whichever route you take, part of your program will need to be written in a portable language. This is where the truly tough decisions come into play. Look back at which languages I consider to be highly portable: Perl and standard C/C++. These languages have their roots in the UNIX world, where portability is a constant concern. Perl has the advantage of being a high level, interpreted language; the language keeps you away from most things that may be system-specific, and the interpreter handles any incompatibilities for you. Cross platform compatible C/C++, unfortunately, is rather limited in what you can do without delving into areas where you will break its portability. Perl is a very good language to work with in and of itself, but it does not have the application server support that Java or the .Net languages have. C/C++ is just downright not nice to work with. Both Perl and C/C++ completely blow Java and .Net out of the water in terms of speed. I have not worked with Ruby, but knowing its properties, I would assume that it is also significantly faster to execute than Java, and that it has no cross platform compatibility problems either. There are also less popular (but still viable) alternatives such as Python, Lisp, O’Caml, and more.

      What is a programmer to do?

      I think that for a smaller project, it is possible to manage the incompatibilities amongst the various JVMs, avoid application server specific code, and use Java. For a larger project, I have to question this approach. For a large project, the current trend seems to be to just throw hardware at the speed issue. The common logic is, “hardware is cheaper than developers.” I do not believe this to be true. Is it truly less expensive to maintain 500 servers than 400? The power bill alone is a small mint. Additionally, this logic assumes that development in Java or .Net is significantly faster and easier than development in C/C++, Perl, Ruby, or whatever. When you are looking specifically at Web applications, Ruby on Rails and PHP also come into play. Personally, I am not a fan of PHP; I think development in it is downright miserable. But PHP does have better performance than Java, from what I have read. I cannot judge RoR at all, since I have not personally worked with it.

      If you are working on a large application, it is impossible to ignore the performance factor. Hardware costs are ongoing; a well written application gets written once, and after the initial development, the support, maintenance, and new feature costs are a fraction of the initial development costs. It is my contention that for a large Web based application, it is less expensive to spend a bit more time, effort, and money working with Perl, Ruby, C/C++, or some other highly portable language in a CGI environment than to write slower, less efficient code in Java or .Net.

      In addition, the advantages of Java or .Net quickly disappear when you need to start writing routines or functions that go beyond the core libraries. Perl, C/C++, and Lisp all have excellent support for a wide variety of scientific functions, AI, text parsing, and many other less mainstream applications. For a variety of reasons, Java and .Net do not have these kinds of libraries, and the languages themselves do not lend themselves well to this type of development. If your application goes beyond basic C/R/U/D functionality, then your secret sauce is in the backend logic. Backend logic in Perl tends to take about 75% fewer lines of code than the equivalent logic in Java, VB, or C#, because Perl is a much more expressive language; in my experience, this holds true for any functional programming language. As soon as your program goes beyond gluing various libraries together with small dabs of code, the deficiencies of Java and .Net quickly become obvious. If your differentiating factor is going to come from advanced logic or high performance, Java or .Net just will not cut the mustard.
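
      As a purely illustrative sketch of the kind of compactness I mean (the records and field names here are invented), a few lines of Perl can group and total a set of records with nothing more than a hash and a sort:

        #!/usr/bin/perl
        # Group hypothetical order records by region and print the totals, largest first.
        use strict;
        use warnings;

        my @orders = (
            { region => 'East', amount => 120 },
            { region => 'West', amount =>  80 },
            { region => 'East', amount =>  40 },
        );

        my %total;
        $total{ $_->{region} } += $_->{amount} for @orders;

        print "$_: $total{$_}\n"
            for sort { $total{$b} <=> $total{$a} } keys %total;

      The task itself is trivial; the point is that the grouping, the accumulation, and the sort each stay at roughly one line, and that ratio tends to hold as the logic gets more involved.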

      All said and done, writing cross platform code is always a juggling act. There are no hard and fast rules to follow. How you decide to go about it is entirely dependent upon your needs. Java is a good choice for projects that do not need to scale much or require much business logic, but as the operational demands placed upon the software increase, and/or the amount of business logic increases, it is my belief that the additional effort of coding in Perl, C/C++, or a similar language more than pays for itself in terms of reduced hardware needs, faster execution, and less effort needed to write complex logic.

      J.Ja

      • #3270561

        Some Cross Platform Development Suggestions

        by jslarochelle ·

        In reply to Some Cross Platform Development Suggestions

        I write software that uses a lot of mathematical libraries. The shop I work for builds Fourier Transform Infrared Spectrometers, and believe me, this type of application requires a lot of mathematical algorithms (FFT, Partial Least Squares modeling, and a lot of other spectrum manipulation). We did not have any problem using Java to build our software. On the contrary, the simplicity and good type checking of the language made the process rather smooth and quite productive. The number of libraries and free development tools available makes Java a very good cross platform language. There might be a few libraries available in other languages that are not available in Java, but overall I think that Java scores rather well. The built-in libraries are an especially strong point: just look at the support for multithreading in version 1.5. I don’t think there is any other language that comes with such a complete package, and that is just the tip of the iceberg. Look at the tools available for AOP, database persistence, etc.

        Of course, even with a good language you still have to write good code. For cross platform development it is very important to use the right design (MVC, a layered design, and other such good practices).

        Don’t get me wrong, I love languages like Ruby and Python. Oh, and by the way, don’t expect Ruby to run faster than Java in its current incarnation. I did not perform rigorous benchmarks, but I did use it for a number of text processing tasks (Ruby is the script language I use for those) and it was noticeably slower than Java.

        I even love C++. I still write little programs at home to stay in shape. However, I am sure glad that we made the switch from C++ to Java at work. Java does a lot to help me spend Christmas at home, my evenings watching TV, and my nights in bed. I think that these days it is fashionable to bash Java, or at least to criticize it. Constructive and critical analysis is good, but when a lot of it is happening at once you have to be careful, because it could send newcomers down the wrong path and change the face of Christmas for too many innocent souls.

        JS

    • #3141824

      My Favorite Programming Projects

      by justin james ·

      In reply to Critical Thinking

      I was in a bit of a funk today, wondering whether I will get to work on some truly challenging projects any time soon. For the last year or so, most of the programming projects I have been involved with boiled down to a lot of Excel macros or C/R/U/D operations in VB.Net. Mind you, there is no shame in this; it pays the bills. Some programmers are delighted with these kinds of projects, and there is nothing wrong with that. But I like a little more meat with my potatoes. So I started reflecting on some of my favorite programming projects from the past, to figure out what they all had in common.

      One great project I worked on (in terms of my enjoyment factor; personally, I think the product was a dog and the marketplace did not want it) was writing filters for screen scraping software. The software was selective: it was built to extract specific information from various online databases. The job was a true challenge. In a nutshell, I would spend hours or even days analyzing the HTML code from search results, then crafting a king sized regex to parse the page. The debugging process involved changing maybe a few dozen characters in the regex per day and constant testing. What I really liked about this project was that it was highly creative; there was typically no one “right” regex to write, and the work was mostly searching for patterns. It was like being a treasure hunter. To this day, I still know my regexes inside and out thanks to that job, and I have a tendency to use regexes for just about everything I can.
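
      To give a very simplified (and entirely invented) flavor of what those filters looked like, here is the sort of Perl involved when a search result row has to be torn apart by a single expression; the real ones ran to hundreds of characters:

        #!/usr/bin/perl
        # Hypothetical result row; the real markup and field names are long gone.
        use strict;
        use warnings;

        my $row = '<tr><td class="name">Acme Corp</td><td class="phone">555-0134</td></tr>';

        if ($row =~ m{<td\s+class="name">\s*(.*?)\s*</td>\s*
                      <td\s+class="phone">\s*(.*?)\s*</td>}xis) {
            my ($name, $phone) = ($1, $2);
            print "$name => $phone\n";    # Acme Corp => 555-0134
        }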

      Another wonderful project was a personal one: writing my own shopping cart system. Like any good hacker (in the purest sense of the word), I was not satisfied to create an online shopping cart using existing components. I devised the most devilish project specs possible: no libraries at all (not even standard ones), no database server, upload-and-go installation, and so on. At the end of the day, I ended up writing my own session handlers, logging methods, a flat file database system, and a template engine, and emulating a significant portion of .htaccess’s functionality (including the status code 403 behavior), all in under 2,000 lines of code. This project taught me a lot about good coding techniques (and about RFC 2616), and about making the most of what I had on hand.
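
      Just to illustrate the spirit of the thing (this is not the original code, and the file layout is invented for the example), a dependency-free, flat file session store in Perl does not need to be much more than this:

        #!/usr/bin/perl
        # Sketch of a flat file session store: one small file per session, key=value lines.
        use strict;
        use warnings;

        my $session_dir = './sessions';               # hypothetical location
        mkdir $session_dir unless -d $session_dir;

        sub new_session_id {
            # Crude but dependency-free: timestamp plus some randomness.
            return sprintf '%x%04x', time(), int rand 0xFFFF;
        }

        sub save_session {
            my ($id, %data) = @_;
            open my $fh, '>', "$session_dir/$id" or die "cannot write session: $!";
            print {$fh} "$_=$data{$_}\n" for sort keys %data;
            close $fh;
        }

        sub load_session {
            my ($id) = @_;
            open my $fh, '<', "$session_dir/$id" or return ();
            my %data = map { chomp; split /=/, $_, 2 } <$fh>;
            close $fh;
            return %data;
        }

        my $id = new_session_id();
        save_session($id, customer => 'guest', cart_items => 3);
        my %restored = load_session($id);
        print "$restored{customer} has $restored{cart_items} item(s) in the cart\n";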

      In college, I had the opportunity to use StarLogo to put a scientist’s theory on ant behavior to the test. I closely modeled her theory in code and then set it running, and I also prepared a full post-mortem of the project. This project was a ton of fun for me on many levels. It was a great experience to model theoretical behavior in code. Even better, I was using code at an abstract level to create visual patterns and data sets to be analyzed. At the end of the day, I determined that the theory I was testing had a serious flaw, and by adjusting the program to work around it, I was able to make a much needed correction to the theory.

      What all of these projects have in common is my love for writing code that goes beyond the C/R/U/D process and involves deep logic. They also all required a substantial amount of planning and thought before the first line of code was written. None of them could have been created with the “drag-n-drop” or “shake and bake” techniques that are filling the days of more and more programmers. Don’t get me wrong, I am grateful that I no longer need to keep writing the same database access code in every project. But my passion for programming goes far beyond seeing a project reach an end user and getting a paycheck. I truly love problem solving and attention to detail. I have written more “get data from the user, validate it, stick it in the database, and display it to another user” projects than I can count, but I barely remember any of them. The projects that involve less mainstream techniques, like heavy use of recursion, functional programming, and pattern matching, are the ones that I remember. If I did not love programming, I would not be doing it. And those are the projects that I love the best.

      What are your favorite programming projects from the past?

      J.Ja

      • #3143987

        My Favorite Programming Projects

        by mark miller ·

        In reply to My Favorite Programming Projects

        I’ve had a few. The first one I’ve talked about previously. One of my first jobs out of college was working on a report generator utility that had a script interpreter. The whole thing was written in-house in C. There were some aspects of it that were a royal pain to deal with (tons of pointers and linked lists), but I still look back on it with some fondness. I finally got to see the insides of a language interpreter, and muck around with it, something I’d wanted to do for years.

        My first and second Windows programming projects (first one in C, second in C++/MFC). The first one was fun because I was just learning about Windows development. I learned a lot, and I got to work on a screen that had some complex interactions going on. The interesting thing was mapping the business rules to the appearance and function of the GUI. The second Windows project was one of the few times I worked on a truly OOP C++ project. Most of the code had already been written. All of the business logic was laid out in objects and methods. Design patterns were used to handle the complexity. I loved the way business logic was isolated. Modifications were a breeze, though I found it somewhat difficult to write new code as beautifully as the old. Sometimes I succeeded, but sometimes I didn’t. It takes experience to do that in OOP. The downside to the project, as far as the customer was concerned, was the people who designed the app. didn’t make efficiency a high priority. When I ran its database load logic through a debugger it was reloading the same data many times in one operation. The data was stored in hierarchical fashion, so this was part of what made it complicated. If the user did a search, it would go down a level or two, load that data, then if the search engine wanted to drill down further, it would reload the same hierarchy again, and go down one more level, all the while allocating, populating, and deallocating objects that this data was getting loaded into. It did this repeatedly, reloading the same data over and over again just to go down successive levels. It was a real performance bottleneck. The app. actually had data caching logic written into it, but they didn’t use it as effectively as they could have.

        In my last gig, my most interesting project was one I got paid the least to work on. It was a business app. for a small business in ASP.Net that primarily did accounting. It kept track of accounts receivable and the amount of service customers had purchased and used, and produced reports for some business intelligence. In addition it provided a facility for marketing that linked into Microsoft Outlook (a mailto: link and a little DHTML action), since the ASP.Net app. kept track of their client list, but Outlook was their preferred e-mail client. The challenge was there were so many inter-related pieces that had to show up onscreen at the same time. If the user changed one thing, it could have a cascade effect, causing other displays on the screen to change. I loved working on the business logic, and I relished the opportunity to do some custom client interface work. It really showed me the value of using objects to isolate logic in a web app. You can use straight, linear code, but it gets real messy. In one of my major modifications to the project, I created my own data-bindable read/write objects (in .Net 1.1 no less). It was the best thing I did. That made me learn the value of code generators… I’ll have to find a good one for my next project.

        I had a project recently that had a LOT of CRUD in it (very repetitive), but it had some icing on the cake. There wasn’t much business logic involved, but it gave me the opportunity to learn some new technologies. The primary app. was ASP.Net, but the customer wanted to be able to use Excel as an offline thick client. In hindsight I think this could’ve been done better/more efficiently, but the idea everyone latched on to was to have end users upload the Excel worksheets to the web server and have it enter them into the database, and vice versa: it would populate Excel worksheets and download them as well. I’ve heard some suggest using the CSV format, but that would’ve only worked for worksheet uploads. The downloaded worksheet had a LOT of formulas in it. Anyway, dealing with Excel was the most interesting part. I wrote logic in VB.Net on the server to automate it. Some of you are going to really balk at this, but no need to worry. Everyone was fully aware of the scalability issue. It was intended to be a little-used feature. At most one or two people would use it at a time, and only then twice a year. Once I got the hang of Excel automation, I got into it. It was actually fun. I’m not sure why they didn’t go the route of having Excel upload/download its own data to the database. I might’ve been able to write a .Net assembly for that. The caveat is they used Office XP. I’m not sure if it had the ability to interface with .Net at that point.

        I even had one small opportunity to use objects and recursion for one screen that displayed a hierarchy of organizations, whereby users could select different Excel worksheets to download. I had to only display organization levels that had data already entered for them. The app. also had to consider what level the logged in user was working at, and only display the appropriate hierarchy.

        Overall I enjoy projects that allow me to explore new technologies, and ones that require me to really think about what I’m doing. Repetition really gets me down.

      • #3142316

        My Favorite Programming Projects

        by sterling “chip” camden ·

        In reply to My Favorite Programming Projects

      • #3269070

        My Favorite Programming Projects

        by jamie ·

        In reply to My Favorite Programming Projects

        The current project that I am on has been great. I wrote my own database engine to handle user reporting options, including indexing and select, insert, update, and delete operations.

        I have worked on some really dodgy projects where the client has asked that an app be made to behave like another application (usually an accounting package), e.g. MS Word or Expedition (document and issue tracking), but my favourite was getting Outlook to behave like Project. I view these as challenges, even though I think it’s ludicrous to force an app to do something it was never designed to do in the first place.

        I also worked on a project involving a web app where the client wanted the Flash removed because it chewed up too much bandwidth. I replaced the Flash with straight HTML and JScript.

        In general I enjoy a project that lets me learn new things or apply old things in a new way.

         

        Cheers

        Jamie

      • #3268952

        My Favorite Programming Projects

        by fjanon1 ·

        In reply to My Favorite Programming Projects

        One of my fun projects was a “trial and error” compiler for a computer built from 16 pipelined, bit-sliced AMD29K processors; it would try all the combinations to find the fastest instruction ordering for a given algorithm. We tried to write the code for an FFT (image processing) by hand and gave up after a day at a whiteboard. The day after, I wrote the compiler/simulator, went to lunch while it was running on a VAX 750, expecting it to take a day or so to find a solution, and came back after an hour to see that it was done! I had to go through about 300 pages of logs because I couldn’t believe it, but it was actually working, and even finding several solutions. It was running faster than a Cray X-MP at that time…

        Another one, which I just did several weeks ago, is a tiny XML parser in Javascript. Several years ago, being a Java/C/C++ guy, I dismissed Javascript as a language of no interest. I took another look at it several months ago and found some really interesting articles about it (Crockford & Flanagan!!!). I started writing some fun and cool code and decided to write an XML parser on the train during my commute. I wrote it as a state machine, and it is amazing how small (3.5KB) and fast it is! It’s at http://www.geocities.com/fjanon/JavascriptParser.html

        I have always been a fan of state machines, a leftover from my early years as a logic designer. I miss using them, and I really think we should use them more often in software development. I was stunned by the work done on Hierarchical State Machines by Miro Samek. I saw him at Xerox PARC in Palo Alto a couple of years ago; I didn’t understand anything about HSMs that day, but I had the feeling that it could be dynamite. I bought his book, read articles about HSMs, and I think I will use the concept soon. I think it could be the way to get GUIs without bugs… Check it out at http://www.quantum-leaps.com Note: I have no affiliation whatsoever with Quantum Leaps; I just think the work Miro did is really good.

        Fred Janon
        Fremantle, Western Australia

      • #3142595

        My Favorite Programming Projects

        by davet ·

        In reply to My Favorite Programming Projects

        Thinking back over the past 30 years, the degree of challenge in a project has had surprisingly little effect on how satisfying it was.

        My favorite projects have always been those where I have had a) control over design and functionality, and b) enough time to do it right.

    • #3142257

      Net Neutrality Hurts More Than It Helps

      by justin james ·

      In reply to Critical Thinking

      George Ou just posted another blog regarding Net neutrality, and I felt that the time was right for me to weigh in on this issue as well. His point is valid: why is it reasonable for a content provider to block or grant access based upon who carries the last mile, while the last mile is not allowed to block or grant access? It is not reasonable at all. Even worse, let us say that ESPN is able to extort a dollar per thousand users from my ISP. I know that my ISP is not going to start offering “premium broadband”; they are going to pass that cost on to me, regardless of whether or not I personally view ESPN’s premium content.

      I am also quite disturbed by what this implies about Internet users. Web sites have been trying for years to derive a revenue stream from memberships, micropayments on a per-view basis, and other techniques. Thanks to the availability of free alternatives, no Internet user is willing to pay for content, regardless of how much better it may be than the free alternatives (example: Encyclopedia Britannica vs. Wikipedia). “Adult content,” some specialist niche content sites, and content where the access comes with a paid membership in a non-Web entity (such as access to The Economist’s Web site, provided with a subscription to the magazine) seem to be the only ways a company can get a consumer to pony up cash. The fact that content providers feel they need to extort the ISPs in order to generate revenue from content that cost them millions of dollars to produce is sad.

      You can thank sites like YouTube for this situation, as far as I am concerned. YouTube is quickly becoming a giant, public TiVo. I own a television, but do not have cable TV. However, I am sure that any TV show I might want to watch is on YouTube right now for free, copyrights be damned. If I cannot get it through YouTube, I can find it on a Gnutella network or BitTorrent. So the content providers are getting ripped off. First, they lose viewers on the TV channel to the Internet, which is OK with me. Then, when they try to compete by putting their content on the Internet, often ad supported, that content (or their TV content) gets put up on YouTube or P2P’ed, so the company cannot even generate ad dollars on its own Web site.

      The most dangerous man in the world is a man with nothing left to lose, memories of what he used to have, and his back up against the wall. This is the position that the traditional content providers are finding themselves in. And they are getting more and more desperate. ESPN’s attempted power play is not the move of a clear thinking, rational company. It smacks of desperation. It is like having a heated discussion with someone and they cannot budge you from your position, so they hit you in the head out of frustration.

      I have a problem with “Net Neutrality” legislation on a number of levels, but the biggest one is that the definitions are completely impossible. How do you define what is considered an ISP, or even “the Internet”? Is my cell phone part of “the Internet” if I turn on my WAP browser? Is my home PC considered a content provider, simply because I have a static, public IP, a domain name pointing to it, and a BSD server on it doling out Web pages? Is an Internet anonymizing service considered a backbone provider because it acts as a pass-through for packets? If I follow the RFC for TCP/IP using carrier pigeons at the physical layer, are the “flying rats” of New York now part of “the cloud” and subject to regulation? What if they refuse to fly to certain locations to deliver the packets?

      Any good law needs clear, understandable definitions. Period. When grey areas, loopholes, wiggle room, and fudge factors exist in legislation, the people who suffer are those who do not have the resources to hire lawyers to interpret it and develop justifications. That means you and me. Look at the tax laws. I have the pleasure of paying full income tax for my bracket, with only minor deductions. But a major company pays no taxes, because it hired a lawyer to base it out of the Bahamas. When I try to do that, it is called a “tax shelter,” and I enjoy the benefits of three free square meals a day courtesy of the prison system. When someone with a clever lawyer does it, they get a bonus and stock options. And someone with billions pays a smaller percentage of their income in taxes than I do, thanks to the magic of deductions (unless the Alternative Minimum Tax kicks in).

      “Net neutrality” legislation, thanks to the impossibility of ever writing it correctly, does nothing to protect you or me, or the small entrepreneur, or anyone else, and will simply be sidestepped by any major “evil” corporation. Furthermore, paying a premium for upgraded service is a time honored tradition. FedEx charges me more for next day delivery than for “best effort” delivery. Toll roads have no traffic lights. Airlines charge more for non-stop flights than for flights with layovers. And so on.

      Thanks, but no thanks. I would rather see a total lack of regulation here and let the marketplace sort this out. If ESPN (or any other content provider) wants to try to force ISPs to pay for their content, I suspect it will fail. ISPs that try blocking or limiting access to certain content providers will probably not succeed either. Customers hate cable TV packages with a passion; if ISPs try to move to the same formula, they will fail too. The marketplace has already made it clear that as much as people want premium content online, they are unwilling to pay for it. Consumers simply do not care where the originator of the charge is; they are too accustomed to, and happy with, paying one simple ISP bill and having full, unlimited access to accept anything else.

      J.Ja

      • #3143029

        Net Neutrality Hurts More Than It Helps

        by spdavies ·

        In reply to Net Neutrality Hurts More Than It Helps

        “Customers hate cable TV packages with a passion” – but we still have them, don’t we? To say that the lowly consumer will be able to control whether ISPs institute premium charges (if they are allowed to) is either the height of innocence or the result of an agenda. There is only one high-speed provider in my area; I am at their mercy. If they decide to charge extra for Google, I will either pay or not Google. It is as simple as that. In the same article, you state that premium charges are usual and accepted in other arenas. Which is it: not possible because of consumer pressure, or expected, likely, and OK?

      • #3112369

        Net Neutrality Hurts More Than It Helps

        by justin james ·

        In reply to Net Neutrality Hurts More Than It Helps

        If consumers don’t like cable TV packages, why do something like 98% or 99% of the people who can get cable TV get it? They are talking with their dollars. They would rather pay for cable TV as a package, paying for channels they do not use, to get the channels they do use.

        If consumers truly do not want to pay for Web sites they are not using, they will opt to go to a different ISP or find an alternative. I may note that satellite broadband reaches nearly everywhere. Unless you are in a place where a satellite signal is impossible (I am well aware that satellite is less than perfect), you always have an alternative. Even when there is only cable but no DSL, or vice versa, you can almost always get an alternative ISP over the same wires.

        Again, vote with your dollars. You don’t like the price you pay for cable TV? Don’t watch it. I choose not to have cable TV, because it would cost me a fortune to get the 5 or 6 channels I would actually watch. Personally, if I have to choose between paying more for Google (the ISP blocking it unless Google pays more) or ESPN 360 (ESPN blocking it because the ISP refuses to pay more), and doing without, I will do without those sites. It is increasingly rare for there to be only one provider of a given service or similar content. Block Amazon? I will use B&N. Block Google? I will use Yahoo! or MSN. Block ESPN 360? Oh well, I don’t follow sports. The only website that holds a monopoly is eBay. Block them, and now you have a problem.

        But outside of that, this entire issue is bogus, and it flies in the face of business reality. People have choices, and they are free to exercise them. No one holds a gun to anyone’s head and forces them to use a particular site, service, or ISP.

        J.Ja

      • #3112241

        Net Neutrality Hurts More Than It Helps

        by wmlundine ·

        In reply to Net Neutrality Hurts More Than It Helps

        ID”ten”T

    • #3142727

      Be Careful That Your Data Does Not Lie

      by justin james ·

      In reply to Critical Thinking

      I was playing Warcraft 40K: Dawn of War (correction 6/24/06: I mean, Warhammer 40K: Dawn of War) a few days ago, and noticed something intriguing about the victory screen's statistics: while numerically accurate, they presented an incredibly misleading picture of how the game played out. The victory screen would bold the number in each category that it deemed to be "best," but in reality, what it called the "best number" was not necessarily so. Numbers and graphs are a very powerful way of presenting information, and sometimes the only way of presenting information, but without proper thought going into how the information is presented, the result is useless or even misleading.

      What Warcraft 40K (correction 6/24/06: again, Warhammer 40K) did wrong, in this case, was to make assumptions about whether a high number was better or worse. For example, the player who lost the fewest units was deemed the "best" in that category. However, that player was knocked out of the game very quickly! In other words, they lost 10 units and I lost 80, but the 10 units they lost were every unit they had, and the 80 I lost were a small percentage of my overall unit count. To put it simply, the units-lost count lacks an adequate baseline to be compared against.

      Here are just a few examples of how the data you present to your users can be numerically accurate and yet quite inaccurate in terms of what the consumer actually wants to know.

      To use some more tangible, real world examples, imagine that you are viewing a sales report. You see that a particular sales person sold $500,000 worth of product. That sounds great, right? But what if the previous sales person for that area used to sell $2,000,000 worth of product in the same span of time? What if the competitor’s product is selling $5,000,000 in the same amount of time? What if the other territories have twice the sales, but five times as many accounts? Again, without a proper comparison, the simple sales number of $500,000 is useless or misleading.

      Another example of this is stock market indexes. If the DJIA drops 400 points in one day, is that nothing to worry about, or should you start hoarding food? Well, it all depends on what percentage of the starting value that drop represents. If the starting value was 1,115, then 400 points is a 36% collapse and it is time to head for the hills. But if the starting value was 10,516, then 400 points is a drop of less than 4%, a much smaller problem.

      Even with a baseline presented, how do you know that the baseline is appropriate? Take market share, for example. What is the relevant way to measure market share? Is it by units sold, or by dollar amount sold? Let us compare two products. Product A has a price of $10 per unit, and Product B has a price of $5 per unit. If Product A sells 500 units, and Product B sells 750 units, then a unit-based market share shows Product A's market share as 40%. But a dollar-amount market share shows Product A's market share as 57%. That can be some pretty useful math, being able to increase market share by 17 percentage points simply by changing your definition of market share. But wait, let us add in one more piece of information: one unit of Product A has the efficacy of five units of Product B. In other words, you need to buy five times as much of Product B to do the job of one unit of Product A. Let us re-write our market share calculations in terms of "efficacy sold." Product A sold 500 units of efficacy, but Product B sold only 150 (its 750 units divided by five). Now, Product A's market share based on efficacy sold is roughly 77%. So, what really is the best way of calculating your market share in this situation? In this case, I would argue in favor of an efficacy-based market share.

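      To make the arithmetic concrete, here is a quick sketch of the three calculations (made-up numbers taken from the example above, not production code):

      Dim dblUnitShareA As Double = 500 / (500 + 750)                          ' 40% measured by units sold
      Dim dblDollarShareA As Double = (500 * 10) / ((500 * 10) + (750 * 5))    ' 57% measured by dollars sold
      Dim dblEfficacyShareA As Double = 500 / (500 + (750 / 5))                ' 77% measured by efficacy sold
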
      Percentages are another dangerous area. “Percentages can be tricky?” you ask. Yes, they can be. The value of a percentage all depends upon the relevancy of the total that is being used for comparison. For example, if I say “I am almost 20,” most people will assume that I am only a month or two away from my 20th birthday; after all, most people measure their distance from a certain age based on a 12 month time span. But what if I just turned 19 last month? By the standard measure of “almost” a certain age, I am only 8% of the way to the age of 20. But compared to how long I have lived, I am 95% of the way to the age of 20. Even a number as simple as a percentage can be misleading.

      In the same vein, averages can paint a different picture of reality too. Averages have a way of reducing outliers to irrelevance when the data set gets large enough. Imagine that you are managing a call center, and you see an average hold time of thirty seconds. That sounds good. Now, what if you later discovered (after deep diving in response to numerous customer complaints) that for most of the day the hold time was zero seconds, but during peak hours the hold time was actually five minutes? Now your average no longer looks so great; it disguised your need for additional workers during peak hours.

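      As a rough sketch (the hold times are invented for illustration), the average-versus-peak trap looks like this:

      Dim aHoldSeconds() As Integer = {0, 0, 0, 0, 300, 300, 0, 0, 0, 0}   ' hourly samples; two peak hours
      Dim intTotal As Integer = 0
      Dim intPeak As Integer = 0
      For Each intHold As Integer In aHoldSeconds
            intTotal += intHold
            If intHold > intPeak Then intPeak = intHold
      Next
      Dim dblAverage As Double = intTotal / aHoldSeconds.Length   ' 60 seconds: looks acceptable
      ' intPeak = 300 seconds: what the peak-hour callers actually experience
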
      Another dangerous data trap is time-span comparisons. Take a toy store as an example here. If you compare Quarter 1 2006 sales to Quarter 4 2005 sales (current quarter vs. previous quarter), then you will be panicking every year, because you will see a huge drop in sales. But if you compare Quarter 1 2006 to Quarter 1 2005, you are able to see a proper "apples-to-apples" comparison. On the other hand, for sales of a non-cyclical product such as medication for chronic diseases, a current-quarter vs. previous-quarter comparison gives a more accurate snapshot of sales trends.

      What can you, as a developer, do about this?

      Your goal is to serve your end users’ need for accurate, reliable, and understandable information. The first thing you should be doing is to always explain the methodology used in language that the vast majority of your users can understand. You can use technical mathematics language if statisticians are going to be using your data, but you cannot do this if your data is headed for a major news Web site. Another thing you should do is to put yourself in the shoes of your end user, or ask them what they need to know. Learn about the numbers. Find out what the numbers actually represent, and the typical data trends for that data.

      Typically, first order calculations such as a raw sales number are not very useful. Second order and third order (velocity and acceleration) calculations tend to provide much more information. It is one thing to say that a car is 800 feet down a quarter-mile race track (a first order number, distance). It is more useful to state that its current speed is 80 MPH. It may be even more useful to say that its acceleration is 9 feet per second squared. And if you are the race car mechanic, you probably want a plot of the instantaneous velocity and acceleration throughout the race, to find out where the vehicle's performance needs to be tuned for maximum race speed. Meanwhile, the fans just want to know the total time and trap speed.

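      As a rough illustration (the distance samples below are invented), the second and third order numbers are just successive differences between samples:

      Dim aDistanceFeet() As Double = {0, 30, 80, 150, 240, 350}   ' distance sampled once per second
      For i As Integer = 2 To aDistanceFeet.Length - 1
            Dim dblVelocity As Double = aDistanceFeet(i) - aDistanceFeet(i - 1)                          ' ft/s over the last second
            Dim dblAcceleration As Double = dblVelocity - (aDistanceFeet(i - 1) - aDistanceFeet(i - 2))  ' change in ft/s, per second
            Console.WriteLine("t=" & i.ToString() & "s  v=" & dblVelocity.ToString() & " ft/s  a=" & dblAcceleration.ToString() & " ft/s^2")
      Next
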
      This brings up the issue of graphical representations of data. We all like charts and graphs. My boss likes to joke with me and ask, “Can I get that as a pie chart?” In reality, many graphs are not as useful as they may appear. Like raw numbers, a graph without an appropriate baseline is not very useful. A chart of current sales figures for a year is good. The same chart showing the total market size and competitor’s performance lets the viewer see if the trend is specific to their product, or the market as a whole. A line chart showing a second or third order derivative over time is excellent (for example: market share over time, change in market share over time, etc.).

      You also need to decide how you are going to handle outliers. Are exceptional cases what your users are looking for? Or are exceptional cases to be discarded? Or maybe your users need you to remove the exceptions from the overall data set (so as to not mess up averages), but to point them out elsewhere. Again, you really need to work closely with your users.

      There are all sorts of data traps for the unwary, and these are just a few of them. Be on the lookout, and do not be afraid to ask your users questions if you think something in the project spec does not make sense. Most users would rather have you say to them, “I know this is what you asked for, but I am looking at the raw data and I think we can devise a more useful metric” than to be using a product that gives them a misleading idea of the data due to a poor metric selection.

      J.Ja

      • #3143025

        Be Careful That Your Data Does Not Lie

        by hive_node_349 ·

        In reply to Be Careful That Your Data Does Not Lie

        Surely you mean Warhammer 40k : Dawn of War? 😉

      • #3143016

        Be Careful That Your Data Does Not Lie

        by justin james ·

        In reply to Be Careful That Your Data Does Not Lie

        Yes, thank you for pointing that out… I had Warcraft on the brain, apparently…

        J.Ja

      • #3163916

        Be Careful That Your Data Does Not Lie

        by musicavalanche ·

        In reply to Be Careful That Your Data Does Not Lie

        Thank you for writing such a thoughtful and detailed essay on the tricky nature of numbers, statistics and those who try to interpret them in proper perspective. One of my favourite quotations is:  “Statistics are like bikinis. What they reveal is interesting, but what they conceal is vital.” I hope people take the time to understand your insight and apply it appropriately. Well done, J.Ja.

    • #3112435

      How I Improved Execution Speed By 100 Times in 5 Minutes

      by justin james ·

      In reply to Critical Thinking

      I have been spending a bit of time working on a small utility-type application. This application is designed to solve a problem that I have to deal with constantly, a problem that I am sure many other people have to deal with. It analyzes a delimited file, and determines the field types and maximum width of each field. All too frequently, a customer will dump a delimited file on me without providing any metadata (I consider myself lucky if I get field names; I often have to guess which field is which). To ensure accurate use of the data, it is crucial that I create a database table that is not too narrow and does not contain any incorrect field types. On the other hand, making fields too wide or using more generic field types makes the database slow and uses storage inefficiently. After all, which is faster to perform a JOIN on, an integer column or a varchar? How quickly will I fill up my storage space if I use 200-character-wide columns when I really only need 50 characters?

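      In outline, the job is simple: read each line, split it on the delimiter, and remember the widest value seen in each column (plus a type guess, which I am leaving out here). A rough sketch of that scan, with an assumed file name, delimiter, and column count rather than anything from the real utility:

      Dim intColumnCount As Integer = 10                                  ' assumed: taken from the header row
      Dim aMaxWidths(intColumnCount - 1) As Integer
      Dim cDelimiters As Char() = "|".ToCharArray()
      Dim srReader As New IO.StreamReader("C:\data\customer_dump.txt")    ' hypothetical file
      While Not srReader.EndOfStream
            Dim aFields() As String = srReader.ReadLine().Split(cDelimiters)
            For i As Integer = 0 To Math.Min(aFields.Length, intColumnCount) - 1
                  If aFields(i).Length > aMaxWidths(i) Then aMaxWidths(i) = aFields(i).Length
            Next
      End While
      srReader.Close()
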
      So I wrote the application in VB.Net, a quick and dirty language for a quick and dirty job. Sadly, the performance just outright stunk. Once again, my low expectations of .Net were met. My first thought was that Perl would have run through the test file (about 30 MB large) as fast as my hard drive could dish it out. So I was about to set out to rewrite the program in Perl, when I decided to see if maybe re-working the VB.Net code could yield any speed benefits.

      With about five minutes worth of code editing, I reduced execution time to approximately 1% of what it had been. In other words, the code now runs about 100 times faster. In fact, it now executes about as fast as I would expect equivalent Perl code to execute. I did not bother to rewrite it in Perl, because it is more than fast enough for my needs. Sadly, any performance numbers I would have gotten from it could not be published anyways, thanks to MSDN's license agreement. (Correction 6/29/06: after re-reviewing the MSDN EULA, this is incorrect. I may publish benchmarks on the .Net Framework; it is one of the few exceptions to the general "no benchmarks allowed" rules in the EULA)

      What changes did I make to the VB.Net code to achieve this performance miracle?

      I dumped as much of the object oriented code as I could.

      Here is an example (clarification 6/26/06: this is not the real code, just some similar code for demonstration purposes):

      “Proper” Object Oriented Code

      For iRowCounter = 0 To cStringList.Length - 1
            aSplitString = cStringList(iRowCounter).Split(sDelimiter.ToCharArray)

            For iColumnCounter = 0 To aFieldNames.Length - 1
                  If aSplitString(iColumnCounter).Length = 0 Then
                        sOutput = sOutput & "Empty"
                  Else
                        sOutput = sOutput & aSplitString(iColumnCounter).Length.ToString
                  End If

                  sOutput = sOutput & " "
            Next

            sOutput = sOutput & vbNewLine
      Next
       

      Procedural Style Code

      iNumberOfRows = cStringList.Length - 1
      iNumberOfColumns = aFieldNames.Length - 1
      cDelimiters = sDelimiter.ToCharArray

      For iRowCounter = 0 To iNumberOfRows
            aSplitString = cStringList(iRowCounter).Split(cDelimiters)

            For iColumnCounter = 0 To iNumberOfColumns
                  If aSplitString(iColumnCounter).Length = 0 Then
                        sOutput = sOutput & "Empty"
                  Else
                        sOutput = sOutput & aSplitString(iColumnCounter).Length.ToString
                  End If

                  sOutput = sOutput & " "
            Next

            sOutput = sOutput & vbNewLine
      Next

       

      These examples are quite short, but you can see the difference. By making these types of small changes, I took execution time from about 7.5 minutes to under 10 seconds.

      Unfortunately, the program now looks a little too much like procedural code for the comfort of some people. After all, I am using an OO language, right? So why use all of that procedural code? Well, I think the results speak for themselves.

      This example highlights an interesting thing: the less elegant code wins out. Is it easier to maintain code like this? Absolutely not. I have always said that each line of code is a potential point of failure; reduce your lines of code, reduce your errors and debugging time. Furthermore, what is the point of using an OO language, if you are going to strip away much of the OO aspects of it as soon as you can? And why take up memory duplicating all of that information when it is already available in the objects?

      I have read through a lot of source code in my life, and many programmers do indeed choose to use the OO style as much as possible. The code does look better, and it retains an element of elegance. But as my experience has shown time and time again, the OO style executes extremely slowly. The reason lies in the very nature of OO code. For each of those row length lookups, the system is not caching the length, even if it has not changed. Even if the length is stored as a property (as opposed to being calculated on the fly), that information still has to be dug up. This means burrowing through the object tree to get at the underlying data, and then bubbling it back up until it hits that For condition. In the procedural style, that information is already on hand in the final type needed.

      As a rule of thumb, if a property (or method return) will be used more than once before it changes, it is nearly always faster to store that information in a temporary variable. Of course, there are exceptions to this. If you are going to be working with an insane number of temporary variables at once, the memory used to store them may go up so quickly that you bury the needle on RAM usage. An example of that is if the items in the property are absolutely huge (say, 100 MB of data in each one) or if the application is highly multithreaded. In a situation like that, you need to decide between memory usage and CPU utilization. But for your average utility loops, it is a sure bet that storing those values in temporary space (especially things used in a condition) is going to reap huge performance benefits.

      So choose your poison: ugly, harder to maintain, less elegant procedural-esque code, or slower to execute, pure OO code. Regular readers know my emphasis on end user satisfaction; slow code makes no one happy. I prefer to break the OO paradigm in order to reap speed benefits any day of the week.

      J.Ja

      • #3112421

        How I Improved Execution Speed By 100 Times in 5 Minutes

        by sterling “chip” camden ·

        In reply to How I Improved Execution Speed By 100 Times in 5 Minutes

        I don’t consider it a violation of any object-oriented rule to cache a value that, by definition, you know will not change during the course of execution.  If there were a possibility that the values could dynamically change, then you would need to evaluate as you go.  Your optimization is really just binding to the data earlier rather than later.  This applies to non-OO languages as well, if a function must be called to obtain the data.  I would make the same optimization in any language from Assembler to Ruby.

      • #3112387

        How I Improved Execution Speed By 100 Times in 5 Minutes

        by justin james ·

        In reply to How I Improved Execution Speed By 100 Times in 5 Minutes

        Sterling –

        Personally, I would agree. Unfortunately, I have seen too much code written by people trained in OOP techniques to think that this sensible approach is the way people are being taught. Maybe it is a side effect of my learning to program in the pre-OOP days (my first language was line-numbered BASIC on an old VMS or UNIX system, followed by COBOL, and Pascal was the "advanced" course), and never being formally taught OOP techniques. But I see the results from a lot of college graduates, and that is the way they code. They have a deep aversion to doing any kind of caching or binding or anything else that I discussed. I can only assume that this is something taught in schools or in current programming books; maybe it is a Java thing, but I see it a lot in PHP code as well.

        J.Ja

      • #3112320

        How I Improved Execution Speed By 100 Times in 5 Minutes

        by mark miller ·

        In reply to How I Improved Execution Speed By 100 Times in 5 Minutes

        This goes back to a point you made in an earlier post, that programmers need to pay attention to the underlying behavior. I wonder if this is a difference between VB.Net and C#. I remember back when .Net 1.0 or 1.1 was being introduced, I believe it was Anders Hejlsberg who said that in C# the non-variable value in a test condition in a for-loop is cached to optimize the test. The C# language group did this so that you would not have to do what you did in your VB.Net code. I imagine the effect can create unexpected behavior if the test is intended to have dynamically changing values.

        So, it should be possible to write (in C#):

        for (int iRowCounter = 0; iRowCounter < cStringList.Length; iRowCounter++)

        and the for loop will cache the cStringList.Length value. Or, perhaps this ability is in VB.Net as well, but the compiler flags the expression as non-cacheable, because you take the cStringList.Length value and subtract 1 from it. In other words, maybe it caches the value only if it comes straight from a property. Just speculating. I'm no expert on VB.

         

      • #3112236

        How I Improved Execution Speed By 100 Times in 5 Minutes

        by cyount ·

        In reply to How I Improved Execution Speed By 100 Times in 5 Minutes

        I agree w/sterling. This is not a difference between OO and procedural code. OO is about inheritance, polymorphism, separating implementation from interface, motherhood and apple pie. It is not about whether you use intermediate variables or not. If students are being taught that, their instructors are missing the big picture.

        The improvement you made to your code is what compiler writers call "loop-invariant code motion." There is a definition and another example of this at http://en.wikipedia.org/wiki/Loop-invariant_code_motion. A good compiler can do this automatically if it can determine that there is no code in the loop (including side-effects of any function calls) that can modify the potentially invariant expression. If you're using a language that's interpreted or has a less aggressive compiler, like VB or Perl, you'll need to move the invariant code yourself. This is just good programming, whether you're writing OO or procedural code.

      • #3112092

        How I Improved Execution Speed By 100 Times in 5 Minutes

        by sterling “chip” camden ·

        In reply to How I Improved Execution Speed By 100 Times in 5 Minutes

        BTW, I would bet that 90% of the performance improvement was obtained by caching the char array.  Repeated manipulations of String objects are notorious for slowing down .NET applications, whereas accessing a numeric property and decrementing by 1 shouldn’t be too expensive.  Still a good practice to cache both kinds of results, so long as you know the results should not change.

      • #3112083

        How I Improved Execution Speed By 100 Times in 5 Minutes

        by justin james ·

        In reply to How I Improved Execution Speed By 100 Times in 5 Minutes

        Sterling –

        I think the CharArray was indeed a major help; the code I posted here is similar in concept to the code I was actually using; in reality, there were a LOT more lines of code. For example, I put the result of a common condition into a boolean once per loop, tuned a regex a bit, and so on and so on. But the real code is a bit hairy to read (lots of zillion-condition conditionals…), so I wrote similar code for the blog.

        Mark –

        I would imagine that since both C# and VB.Net become .Net managed code anyways, they would have the same caching performed. It is indeed possible that the calculation would make a difference… but since it is a static number in the calculation, if the compiler is smart enough to cache a variable, it could/should be smart enough to cache the calculation!

        I am going to see if I can get in touch with some folks at Microsoft (as well as Sun, to ask about similar issues in Java) for an explanation. I am also curious how Java and .Net handle multi-core and/or hyper threaded processors, to get multiple threads going without the programmer explicitly creating them, to improve performance.

        J.Ja

      • #3112041

        How I Improved Execution Speed By 100 Times in 5 Minutes

        by robin ·

        In reply to How I Improved Execution Speed By 100 Times in 5 Minutes

        As sterling already mentioned, much of the improvement probably came from removing object allocations (like the char array) rather than caching the values.  If you use CLR Profiler, VTune, kernrate, or a similar profiler, you should be able to determine why the performance was so abysmal.  I also recommend you look at the CLR performance counters (check out the .NET CLR Memory counters to see if GC was an issue due to large numbers of objects, particularly % time in GC, the heap sizes, and the number of collections for each generation).

        As for HyperThreading and multi-core processors, in and of itself the .NET framework won't do anything automatic here (at least not yet).  Parallelism is a hard problem, and Microsoft, Intel, and AMD are working on it, but today you'll have to either explicitly manage your own thread creation, or use the .NET ThreadPool (QueueUserWorkItem API) to queue pieces of work onto a .NET managed thread.
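
        For reference, a minimal sketch of that ThreadPool approach (the worker routine and its argument here are made up purely for illustration):

        Module ThreadPoolSketch
            Sub Main()
                For intChunk As Integer = 0 To 3
                    Threading.ThreadPool.QueueUserWorkItem(AddressOf ProcessChunk, intChunk)   ' each call runs on a pool thread
                Next
                Console.ReadLine()   ' crude: keep the process alive while the queued work finishes
            End Sub

            Sub ProcessChunk(ByVal state As Object)
                Console.WriteLine("Processing chunk " & CInt(state).ToString())
            End Sub
        End Module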

      • #3110958

        How I Improved Execution Speed By 100 Times in 5 Minutes

        by mark miller ·

        In reply to How I Improved Execution Speed By 100 Times in 5 Minutes

        Justin-

        It all depends on what IL bytecode the compiler generates. Just because it all gets translated into IL by the compilers does not mean they do the same optimizations. I’m not trying to make a “one language is better than another” argument here. Just answering your point. The C# compiler does the optimization of moving the cStringList.Length reference out of the loop, storing it in a generated temporary variable, and using the temporary to do the comparison, while the VB.Net compiler may not. Another example of this sort of difference is their C++ compiler. It features what’s called “whole program optimization”, where it looks beyond just conditional statements and loops, and looks to see if there are things that are of wider scope it can optimize. None of their other compilers have this feature.

        C# has other language features where it generates extra IL behind the scenes. For example the using() clause, which automatically calls the Dispose() method on objects declared within it which support the IDisposable interface.

        To the other responders-

        A couple people mentioned that Justin is caching the string. I don’t see where he’s doing this. He’s caching the value he passes into the Split() method (an enumerator), and he’s caching the string length (integer) values. Otherwise, the rest of the logic is the same. He’s still calling Split() repeatedly within the outer loop to create aSplitString.

        From my reading of the code, all he cached were essentially integer values.

      • #3110772

        How I Improved Execution Speed By 100 Times in 5 Minutes

        by aikimark ·

        In reply to How I Improved Execution Speed By 100 Times in 5 Minutes

        It’s pretty sad to see the VB.Net compiler guys missed this optimization.  I would say you’ve found a performance bug.  Good for you if you report it.  (and good thing for all of us)

        In addition to caching values that don’t change in a variable with the proper datatype, you can also gain a lot more efficiency in how you build your output line.  Consider the following two alternatives to string concatenation:

        • use a stringbuilder object
        • create a buffer with your sOutput variable sufficiently large to accommodate the data, delimiters, and NewLine characters.  Replace concatenation with Mid$() statements that place the characters in their proper position in the sOutput 'buffer' (a rough sketch follows).
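
        A rough sketch of that second approach (buffer size and chunk are invented for illustration; note that Mid() positions are 1-based):

        Dim sOutput As String = New String(" "c, 1000000)    ' pre-allocate a generously sized buffer
        Dim iPosition As Integer = 1
        Dim sChunk As String = "Empty "
        Mid(sOutput, iPosition, sChunk.Length) = sChunk      ' write in place instead of concatenating
        iPosition += sChunk.Length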

         

      • #3110767

        How I Improved Execution Speed By 100 Times in 5 Minutes

        by aikimark ·

        In reply to How I Improved Execution Speed By 100 Times in 5 Minutes

        It’s pretty sad to see the VB.Net compiler guys missed this optimization.  I would say you’ve found a performance bug.  Good for you if you report it.  (and good thing for all of us)

        In addition to caching values that don’t change in a variable with the proper datatype, you can also gain a lot more efficiency in how you build your output line.  Consider the following two alternatives to string concatenation:

        • use a stringbuilder object
        • create a buffer with your sOutput variable sufficiently large to accomodate both the data, delimiters, and NewLine charaters.  Replace concatenation with MID$() functions that place the characters in their proper position in the sOutput ‘buffer’.

         

      • #3110761

        How I Improved Execution Speed By 100 Times in 5 Minutes

        by justin james ·

        In reply to How I Improved Execution Speed By 100 Times in 5 Minutes

        aikimark –

        Yes, StringBuilder is definitely faster. ListOfT is faster than an array as well. The actual code (what I posted was similar in spirit, but not the code I was actually working on) does not do any string concatenation at all, and I will probably be converting it to ListOfT tonight or tomorrow night.

        Thanks for mentioning these items, because you are absolutely right about StringBuilder!

        J.Ja

      • #3110681

        How I Improved Execution Speed By 100 Times in 5 Minutes

        by sterling “chip” camden ·

        In reply to How I Improved Execution Speed By 100 Times in 5 Minutes

        Mark: the cDelimiters char array is being cached, which eliminates some string manipulation within the loop.  There isn’t a way to avoid doing the Split within the loop, because the results differ each time.

        Regarding the performance improvements of using StringBuilder, see http://chipstips.com/showtopic.asp?topic=cshacc

      • #3111697

        How I Improved Execution Speed By 100 Times in 5 Minutes

        by i’d rather be boarding ·

        In reply to How I Improved Execution Speed By 100 Times in 5 Minutes

        You seem to have missed the point somewhat here.  What you are talking about is nothing to do with OO, but just simple basic common sense programming that a schoolboy should know about.

        It is blindingly obvious that you should avoid unnecessary function calls (e.g. string.length) and sensibly cache a value if you know it isn't changing.  It's not rocket science, it's simple logic. It's also nothing to do with OO.

        You are confusing OO with bad programming – please do not do that.

        You seem to be surprised at your “amazing findings”?!?! They are just basic common sense that any junior programmer should know.

        Plus, why are you using string concatenation – it's slow too – use the appropriate functions (like string buffers) for speed – again, obvious simple stuff that anyone should know (yes, I know this is an example, but that's no excuse anyway.)

        This style of code is perfect as an example of what should be picked up by peer-review, where the unsuspecting newbie makes the mistake of coding like this – a simple explanation and a bit of mentoring will put them on the correct path.  If I saw a programmer, other than a complete novice, doing this, I would be having serious words with them rather quickly.

        Dominic.

      • #3113390

        How I Improved Execution Speed By 100 Times in 5 Minutes

        by auri ·

        In reply to How I Improved Execution Speed By 100 Times in 5 Minutes

        You’re still getting killed by string manipulation. Using the & or + or += or &= concatenation operators is very, very expensive. I didn’t see you declare sOutput, btw. I’d like to see the rest of your code for optimizations, but here’s the string manipulation-optimized version.

        Microsoft recommends using a StringBuilder for long string building operations. For short string operations use the String.Concat(string1, string2, string3, etc.) method.

        Try the following code and let me know how much faster things are:

        Dim stb As New System.Text.StringBuilder() ' you can initialize this with an initial string if you like

        For iRowCounter = 0 To cStringList.Length - 1

              aSplitString = cStringList(iRowCounter).Split(sDelimiter.ToCharArray)

              For iColumnCounter = 0 To aFieldNames.Length - 1
                    If aSplitString(iColumnCounter).Length = 0 Then
                          stb.Append("Empty")
                    Else
                          stb.Append(aSplitString(iColumnCounter).Length.ToString())
                    End If

                    stb.Append(" ")
              Next

              stb.Append(vbNewLine)

        Next

        Return stb.ToString() ' This is where you get your string back. (or sOutput = stb.ToString() if you like, although that's likely a waste of memory)

      • #3113388

        How I Improved Execution Speed By 100 Times in 5 Minutes

        by aikimark ·

        In reply to How I Improved Execution Speed By 100 Times in 5 Minutes

        Justin,

        This is the first I’ve heard of ListOfT

        Can you suggest a good URL for me to gather information on ListOfT?

      • #3113380

        How I Improved Execution Speed By 100 Times in 5 Minutes

        by kbuchan ·

        In reply to How I Improved Execution Speed By 100 Times in 5 Minutes

        FYI:  VB.Net (and I assume .Net languages in general) caches the "cStringList.Length - 1" value for the For loop, so there's no performance benefit to making the code harder to read/maintain for this.

        I understand that the code posted in the article was not the real code and was simply similar, but I still feel that the optimizations were reasonable.  In fact, I would only accept code that had not been so optimized from my most junior developers… and even then I wouldn’t accept it after the first couple of times.  It’s a basic premise that you move invariant results outside of loops.

        Cheers!

      • #3113339

        How I Improved Execution Speed By 100 Times in 5 Minutes

        by snoopdoug ·

        In reply to How I Improved Execution Speed By 100 Times in 5 Minutes

        Good grief! Doesn't the VB.Net compiler do any optimization? Loop invariants should always be replaced by a local variable. Even the crappy mini-compiler I wrote in CS400 could do that!

        Did you try any optimizers? If so, which? If not, why not?

        doug

      • #3113334

        How I Improved Execution Speed By 100 Times in 5 Minutes

        by dbmercer ·

        In reply to How I Improved Execution Speed By 100 Times in 5 Minutes

        Sadly, any performance numbers I would have gotten from it could not be published anyways, thanks to MSDN’s license agreement.

        I was not aware that MSDN’s license agreement restricted publication of performance information.

      • #3113320

        How I Improved Execution Speed By 100 Times in 5 Minutes

        by madestroitsolutions ·

        In reply to How I Improved Execution Speed By 100 Times in 5 Minutes

        I agree with the author, but I also think it depends on the underlying behavior of classes. There is always a cost associated with using properties that either hide underlying values or calculate them in real time, but I think that cost can be negligible depending on the circumstances. Like someone else mentioned up there, we need to be aware of the inner workings of the classes we use.

      • #3113318

        How I Improved Execution Speed By 100 Times in 5 Minutes

        by justin james ·

        In reply to How I Improved Execution Speed By 100 Times in 5 Minutes

        Wow! Lots of comments to respond to!

        Aikimark –

        I made a few minor mistakes in my previous comment (why do I insist on replying before I've had the first pot of coffee for the morning?). StringBuilder is faster, but only when concatenating more than about 6 strings together. Also, ListOf should have been List<T> (I think of it as "List Of" in my head); it is faster than ArrayList, but I have not seen its performance compared to a standard Array. Here is a link to an excellent article which discusses both the speed of StringBuilder and List<T>:

        http://msdn.microsoft.com/msdnmag/issues/06/01/CLRInsideOut/default.aspx
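
        For anyone else wondering about the syntax, here is a quick illustration (not code from the utility itself) of the generic list next to ArrayList:

        Dim lstWidths As New List(Of Integer)   ' generic list: strongly typed, no boxing of the Integers
        lstWidths.Add(42)
        Dim alWidths As New ArrayList()         ' ArrayList stores Object, so each Integer gets boxed on Add
        alWidths.Add(42)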

        Dominic –

        “You are confusing OO with bad programming – please do not do that.

        You seem to be surprised at your “amazing findings”?!?! They are just basic common sense that any junior programmer should know.”

        I agree with you here; as I've mentioned before in earlier comments, I see way too much code written like this. It seems to be pretty common. Some of the chief offenders for this code style are the code examples in books, magazines, and documentation. I understand that these sources are often limited in their space, and have a desire to keep the example short and simple. But many people just copy code (or techniques, or style) straight out of the book. The end result is code written like this. For example, one coder I know prefaces all of his variables with "my". For example, "myCounter" and "myFileName". Why? Because he copied some code out of some help file that was written like that ages ago, and became convinced that it was how good programmers wrote code.

        What I call that "OO style" is particularly prevalent in OSS PHP apps, for whatever reason, and among Java programmers. My theory is that this is something being taught in colleges or is a "standardized style" somewhere, like K&R or Hungarian notation, but with code technique instead of source code formatting.

        I definitely think there needs to be a really top-notch resource or book written on this topic. With the number of self-taught programmers out there (I am one of them), the proliferation of bad example code (it gets even worse when you Web search for code) helps get new programmers into bad habits early, and they think that this is how it is done.

        Auri –

        StringBuilder does not show performance gains until you are concatenating about 6 strings; unfortunately, the article I got this information from does not say if this means "simultaneously" or "aggregate". I will (one of these days) test to see if that means that StringBuilder is only faster when doing something like String = 1 & 2 & 3 & 4 & 5 & 6 or if it also applies to multiple string operations. To be honest, I almost never produce large amounts of string text and send it to a variable; I almost always have large string information going out to a stream (network socket, file, etc.), so I do not really get a chance to see the differences between StringBuilder and simple concatenation.

        SnoopDoug –

        I did not use any optimizers, just standard Visual Studio settings.

        David –

        I re-reviewed the MSDN EULA, and it turns out that while they have a blanket "no benchmarks to be published without Microsoft's approval" rule, there is an exception (with many, MANY conditions) for the .Net Framework. From the MSDN EULA (http://msdn.microsoft.com/subscriptions/downloads/EULA.pdf):

        Microsoft .NET Framework Benchmark Testing.

        The software includes the .NET Framework component of the Windows operating systems (".NET Component"). You may conduct internal benchmark testing of the .NET Component. You may disclose the results of any benchmark test of the .NET Component, provided that you comply with the following terms:

        Followed by a page or so of terms…

        Thanks for all of the great feedback!

        J.Ja

      • #3113145

        How I Improved Execution Speed By 100 Times in 5 Minutes

        by kevmeister ·

        In reply to How I Improved Execution Speed By 100 Times in 5 Minutes

        This is not a difference between OO and procedural code. This is a performance optimisation resulting from moving invariant expressions out of the loop.

        Several people have said the compiler should have done this automatically – I’m not so sure if achieving this optimisation is as simple as many think it should be.

        The chief problem is that "properties" might be public member variables or they might be accessor functions. How in fact does a compiler actually determine that the expression it is dealing with is in fact constant? If it is an accessor function, how does it know what the behaviour of the accessor function is in computing the value of that property? Even if it is a public member variable, how does it know that some other part of the loop will not cause a side-effect resulting in that public member variable being modified? I think an optimiser needs to be thoroughly clever to determine these kinds of things, if it is possible at all.

        I am primarily a C++ programmer and I would make these kinds of optimisations as a matter of course, if they were likely to be material to the performance of the application. My philosophy is to try and write fast code in the first place within the limitations of the language, not write slower code and hope the compiler is clever enough to optimise it for me.

      • #3210546

        How I Improved Execution Speed By 100 Times in 5 Minutes

        by gardoglee ·

        In reply to How I Improved Execution Speed By 100 Times in 5 Minutes

        I think the original contention as to the teaching of this type of coding is relevant.  It has become all too common for students to be given a set of invariant rules which they are to learn by rote without understanding of the underlying reason for the rule.  This leads to very bad code when the purpose moves from overly simplistic examples in class to real world problems.  This is not unique to OO nor to any other type of language, and is not a new problem.  K&P would not have needed to write a book on the topic if it hadn't already been epidemic thirty years ago.

        The real problem is the idea that good code comes from rote repetition of simplistic coding rules, rather than from an understanding of the underlying principles.

      • #3209966

        How I Improved Execution Speed By 100 Times in 5 Minutes

        by sterling “chip” camden ·

        In reply to How I Improved Execution Speed By 100 Times in 5 Minutes

        Well said, gardoglee.  The best system for programming is the one inside the programmer’s skull.

      • #3206829

        How I Improved Execution Speed By 100 Times in 5 Minutes

        by joa_farore ·

        In reply to How I Improved Execution Speed By 100 Times in 5 Minutes

        Firstly, I agree with everyone that the char array allocation and string concatenation are the problems.
        With regards to VB compiler optimizations, since I know we all love to talk sh!t about things we don't know, VB actually does cache the upper bound of its For loops. C#, however, DOES NOT. I've looked at the CIL emitted by both csc and vbc for For loops in both languages.

        However, the runtime does other optimizations at Just-In-Time compile time. With For loops, the caching actually (potentially) hurts performance when dealing with arrays, because the JIT is unable to recognize that the upper bound you're using is based on the array length. When you don't cache, it is sometimes cached for you, and the JIT will emit machine code that doesn't have redundant array bounds checks. When working with collections, however, it does not do this, and the VB auto-caching is a benefit. Furthermore, VB For loops and C# for loops are very similar but can yield slightly different code, as the VB upper bound is inclusive (i <= 10) and the C# one is exclusive (i < 11).

        No offense to the author, as he did take the time to refactor his code significantly using his own experience and wisdom, but often the uneducated and unfamiliar write "benchmarks" without proper consideration for a lot of what goes on behind the scenes (like running benchmarks on code compiled in DEBUG MODE) and end up reporting lots of erroneous information that circulates around the internet.

        With regard to what is and isn’t faster I agree that caching usually is but that the underlying platform can defy expectations so always MEASURE MEASURE MEASURE! (Plug: System.Diagnostics.StopWatch)
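
        A minimal measurement sketch (the loop body is just a stand-in for whatever code you are testing):

        Dim swTimer As System.Diagnostics.Stopwatch = System.Diagnostics.Stopwatch.StartNew()
        For i As Integer = 0 To 999999
              ' code under test goes here
        Next
        swTimer.Stop()
        Console.WriteLine("Elapsed: " & swTimer.ElapsedMilliseconds.ToString() & " ms")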

        As a last note, the variable names used in those for loops are god-awful. r and c would have been fine. y and x are good for us math folk. row and col(umn) would have been fine, but the names used really did distract terribly from the code.

        JoA_Farore, VB.NET Windows-based .NET 2.0 MCTS

    • #3111494

      Eclipse.org Has Egg On Their Face!

      by justin james ·

      In reply to Critical Thinking

      I must admit, the Callisto release of Eclipse has intrigued me, enough to sit on their Web site waiting to download it. For days (probably much longer), the site has had a clock counting down to the moment of release: 6/30/2006 10:00 EST (45 minutes ago). The site has replaced the clock with a "Callisto is coming soon" message. Maybe I am just being silly, but how hard is it to simply post a link, when the thing has supposedly been ready for some time? (update 6/30/06 13:00 EST: the "Callisto is coming soon" message now has "no really, we promise" added onto it. Meanwhile, there is already an article linked to on their Web site about how Callisto shipped on time; I recognize that a few hour slip is nothing, but I just find this thing funny.)

      What is even funnier, the front page of their site has a link to an article about how Eclipse/Callisto is an example of how large, OSS projects are able to hit deadlines. Whoops!

      J.Ja

    • #3113070

      Compiler Optimizations Can Only Go So Far

      by justin james ·

      In reply to Critical Thinking

      My most recent post (“How I Improved Execution Speed By 100 Times In 5 Minutes“) generated some tremendous feedback. One of the things that was touched upon a number of times was the possibility of the compiler or managed code environment caching values used in loops, or automagically transferring the value to an invariant, as I did in my sample code. One poster, Kevmeister, made some great points, which I would like to discuss in more detail.

      Here is what Kevmeister says regarding the compiler performing these kinds of optimizations:

      The chief problem is that "properties" might be public member variables or they might be accessor functions. How in fact does a compiler actually determine that the expression it is dealing with is in fact constant? If it is an accessor function, how does it know what the behaviour of the accessor function is in computing the value of that property? Even if it is a public member variable, how does it know that some other part of the loop will not cause a side-effect resulting in that public member variable being modified? I think an optimiser needs to be thoroughly clever to determine these kinds of things, if it is possible at all.

      Well said indeed. The compiler really has little way of knowing for sure that the value has not changed. Even if the compiler tore apart the source code to the object used to generate the property, it has little way of deciding what could be cached and what could not be cached, unless it was able to track the value back to a hardcoded number or something declared as a constant.

      Another issue that Kevmeister did not mention is multithreading. What happens if the looping code is in one thread, and another thread does something that would change that value? It is bad enough in a For loop, where the idea of the check value suddenly jumping around could be quite dangerous, but that may indeed be the desired behavior in a While or Until loop. Some people will put a loop in one thread, checking a value that gets changed by a separate thread. So in a multithreaded situation, caching the value is not very helpful.

      Another unmentioned idea for caching these variables would be for the compiler to attach a dirty bit to some values. If the value itself (or any underlying data) were to ever change, the dirty bit would be set. It would barely impact performance. Any variable can subscribe to the dirty bit of another variable, allowing a bubble-up effect. This idea has some merit, but although the overhead on an individual level is rather small in most cases, on a mass scale it could be devastating to performance. And it still does not address the issue of what happens when the value to be checked is derived rather than stored. Consider the following loop statement (I just made up a timer class here, please ignore any similarity to an actual object):

      while dtTimeToStop >= timerProcessTimer.CurrentRunTime()

      If the compiler attempts to cache CurrentRunTime(), there will be a severe performance ding. Even if the compiler is updating its cached value periodically, the cure is worse than the disease.

      Kevmeister also pointed out that something within the loop may change that value. What he does not say, and what blows the whole idea out of the water, is: what if accessing the value itself changes it? Here is a fake class:

      Public Class Class1
          Private intWhenToStop As Integer

          ReadOnly Property WhenToStop() As Integer
              Get
                  If intWhenToStop > 0 Then
                      intWhenToStop -= 1
                  End If
                  Return intWhenToStop
              End Get
          End Property

          Sub KeepGoingUntilDone()
              Dim intCounter As Integer
              For intCounter = 0 To WhenToStop
                  'Do Stuff
              Next
          End Sub

          Public Sub New()
              intWhenToStop = 500
          End Sub
      End Class

      See the problems with caching or otherwise touching WhenToStop() by anything other than the programmer’s code? Yes, it is a rather bizarre example. But I have seen situations where this kind of code does indeed make the most sense (especially before the first few pots of coffee).

      So the idea of the compiler or managed code environment automatically handling these situations is really not such a great one.

      But that still does not let the compiler off the hook completely. It is my belief that just as the compiler throws out warnings and errors, it should also throw out suggestions and hints as well. Visual Studio is very intelligent about many things. I could see the IntelliSense system being beefed up to give some performance tips. Not many programmers get taught any particular language, and far too few programmers read about programming outside of reference books for syntax lookup. With that in mind, I think that while the compiler automatically handling these types of issues is not a good idea, the idea of the compiler making some helpful suggestions is not a bad one at all.

      J.Ja

      • #3112886

        Compiler Optimizations Can Only Go So Far

        by mark miller ·

        In reply to Compiler Optimizations Can Only Go So Far

        I have known language environments to do loop optimization like you’re discussing here, but typically it was rather dumb optimization. In other words, it cached the test value no matter what. I have no idea if Microsoft put in “smart” optimization into the C# compiler or not. It’s often up to the programmer to find these things out by trial and error since optimizations are not well documented. Even so, if the optimization screws you up, it’s possible to construct a loop that bypasses it, or you can just turn optimization off. I checked project settings in C#, and optimization is an option you can turn on and off. I assume it’s the same in VB.Net. The default in VS.Net is that optimization is turned off for debug builds. It looks like it’s turned on by default for release builds.

        Going back many years I’ve seen compilers that have optimizing as an option, with it turned off by default. This is best. Optimizers used to look for patterns in the code, and then replace them with templated optimizations. This can change the semantics of what you’re doing, so one should ALWAYS test optimized builds.

      • #3112873

        Compiler Optimizations Can Only Go So Far

        by justin james ·

        In reply to Compiler Optimizations Can Only Go So Far

        Mark –

        Thanks (as always) for the insightful comment!

        You are absolutely right about that optimization flag existing in VB.Net. You are also (quite sadly) right about the lack of documentation for these things. After all, if someone sees an optimization flag, who wouldn't check it? That's like taking your car in for an oil change, and the mechanic says, "I can give you a tune-up for free since you're here." And then you find out that the "tune-up" involved reprogramming your fuel injection system to work the way he thinks is best. It could be better… but it might not be better.

        I will be writing in the very near future about just this: the total lack of information regarding optimization and performance. The information needs to be accurate, and instead of having to read dozens of magazines and books each month, it needs to be baked right into the code environment. Stay tuned.

        J.Ja

    • #3169248

      Training And Documentation Needs To Teach How As Well As Why

      by justin james ·

      In reply to Critical Thinking

      I recently attended training for Cognos Metadata Modeling. This experience highlighted for me one of the key difficulties in becoming a top-notch IT professional. Maybe it is a result of the IT profession still being a relatively new industry, or the demands of students, or something else, but training and documentation tend to almost exclusively address the how of performing a task, without explaining the why.

      For example, the Cognos training manuals showed us how, step-by-step, mouse click-by-keystroke, to go through the motions of transforming the source data into a usable data model, suitable for report creators and other data consumers. What the manual did not spend much time on at all was why we were making the decisions that we were making in the first place. Luckily, the instructor (Brian) did a top flight job explaining this as we went along, as well as answering all questions about that aspect of the material. The fact remains, if it were not for Brian, I would not have learned anything that F1 could not have taught me. In short, the curriculum as formally laid out was a five day course in how to click, copy/paste, and drag/drop.

      This training course was not an isolated incident. During my brief stint as a Computer Science major (regular readers may remember that I double majored in “can’t-get-a-job-ology”), the courses were extremely concentrated upon theory, the why of what programmers do. The coursework at my college was extremely why oriented. The vast majority of the students were constantly complaining about this. They wanted to be taught how to program in a particular language. Indeed, it was assumed that you knew the language used for the class, and it was up to the student to learn it on their own. For example, CS 111: Introduction to Programming used Pascal to teach the basics such as variables, loops, control structures, and so on. The next course (CS 112: Data Structures) used C, with no explanation of the C language. I had the good fortune of looking at the syllabus in advance and learning C over my summer vacation. The bulk of the students did not do this, and were tripping over C, which was merely being used as a tool to teach data structures.

      In retrospect, I wish that I had stuck with the Computer Science program there on some levels. It was the only time that I was offered a chance to really learn Computer Science as a science, as opposed to a vocation. That is the difference between a why oriented education and a how oriented education. Learning the why means that you understand the underlying theories and principles, without necessarily being taught the implementation end of the process. The how based education teaches you to implement something, but you really do not learn how to make the architectural decisions that you need to craft quality code.

      Unfortunately, as my college experience showed, this is exactly what students want. Schools such as DeVry and Chubb (I have talked to a number of graduates from their programs, and inspected the coursework and books) are how oriented. They teach students to lay bricks, but not to design the building. The Cognos training was the same way. We were being taught how to have the software do what we wanted it to do, but not how to know what we needed it to do. The end result? “Shake and bake” programmers.

      There is nothing necessarily wrong with “shake and bake” programmers; someone needs to do the grunt work. But at the end of the day, it is the grunt work which often has the biggest impact on end user satisfaction. If the person writing the code that parses the data file has a poor understanding of things such as the impact of various file reading techniques on performance, the program will run slowly. If the person designing the interface has no knowledge of usability theory, the application will be difficult to use. The programmer ignorant of proper optimization of SQL statements is going to be writing code that makes the server beg for mercy. And so it goes.

      Language documentation is frequently not much better. It serves as a syntax reference, and not much else. Rarely does the documentation for a language offer any suggestions or tips as to where, when, and why you would choose to use one technique as opposed to something else. The Perl documentation is the only significant exception to this that I am aware of. As a result, people using the documentation as their reference rarely learn that there may be a better or easier way of accomplishing a task. For example, a programmer coming from a language without a built-in, optimized file slurper object may simply try to rewrite their standard file slurping code in the new language, without knowing that there is an easier, more optimized method available to them. This is where the Perl documentation shines; the FAQs are completely why oriented. They are laid out like, "I am trying to accomplish XYZ, what is the best way of doing it?" The answers show not just the best code, but often have different variations with an explanation of which one is best in which situations, and why. More language references need this kind of documentation, as opposed to the current system which is more or less a syntax reference and an explanation of what each parameter is and what the output is.

      Over the years, I have held onto my passion for programming. As a result, I am always reading things that are at least indirectly tied to software theory. Even when I am reading something that I do not think is related to computers, I start to wonder how the ideas can be applied to software design and development. I spend time experimenting with languages that are not popular, to see how the methods they use can be applied to my job, and to evaluate their suitability for use in projects. And so forth. The why of software is what interests me, not the how.

      A properly designed architectural document for a program is the essential blueprint for implementing a software project. Without a good understanding of the why of code, it is very difficult to construct the best possible how. This is why I think that a major shift has to occur in the attitude towards training and documentation.

      J.Ja

      • #3168363

        Training And Documentation Needs To Teach How As Well As Why

        by vaspersthegrate ·

        In reply to Training And Documentation Needs To Teach How As Well As Why

        Your job is not to ask why. Your job is to do or die. Ha. My big complaint about technical documentation is that steps are often left out, too much is assumed.

        As far as knowing why, the Why makes the How more memorable. To only know the How, that’s a robotic, mindless, zombie attitude. If you know the Why, you will also be more adept at work-arounds and such, you can think outside the box.

      • #3168342

        Training And Documentation Needs To Teach How As Well As Why

        by justin james ·

        In reply to Training And Documentation Needs To Teach How As Well As Why

        Vaspers –

        Great point about how understanding the why makes remembering the how easier, I totally missed that. 🙂

        J.Ja

      • #3210576

        Training And Documentation Needs To Teach How As Well As Why

        by almost_there ·

        In reply to Training And Documentation Needs To Teach How As Well As Why

        Amen.  I used to drive my instructors nutso asking “why, why, why?”  I felt like a 5 year old constantly asking why, often not getting satisfactory answers, but usually a how explained another way. 

      • #3212334

        Training And Documentation Needs To Teach How As Well As Why

        by snoopdoug ·

        In reply to Training And Documentation Needs To Teach How As Well As Why

        I’ve been writing technical documentation for over 20 years, mostly SDK work for companies such as Intel and Microsoft. Compilers, in-circuit emulators, Microsoft Exchange, Tablet PC, and so on. That and a couple year stint as a server-side Java developer. My degree is in CS.

        You pose a great question. I will give you some reasons why the answer might not be what you like.

        Technical documentation, such as an SDK, is written from the bottom up. You would never accept an SDK without it, right? So that’s where we start. Once the Reference section has been vetted (that’s a fancy law term for technical review), we move on to the User’s Guide section, where we describe “how to”. So far, so good.

        Now it gets tricky. (And note that much depends on the resources available to create the documentation. In larger SDK projects you will have more than one writer on the Reference section and perhaps another on the User’s Guide. Toss in an editor and a part-time illustrator and you rapidly approach too many cooks!) Is there any time left in the schedule? If so, should you use that precious resource to create examples for each of the 100 classes in the Reference section? Should you create a Cookbook section in the User’s Guide where you describe how to do the dozen most-important tasks step-by-step? Should you create a Tutorial where you walk the user through creating a new widget or some other must-do task?

        There is no one right answer. In many cases (when the SDK is for a widespread product used by many) we cross our fingers and hope someone is thinking about writing a good book. In most cases, the writing team, which consists of the full-time lead and the rest contractors, dissipates.

        And here is another tack. Say the task is to connect to a new database through a data connection. Why would you want to do that? Duh. If I were teaching a module that included that task, I would expect the students to learn more using the Socratic approach. I would ask them “why would you want to do that?” And I would be prepared to learn myself.

        It’s like learning trig. You don’t understand it until calculus. Learning why makes no sense until you learn how.

        doug

      • #3212279

        Training And Documentation Needs To Teach How As Well As Why

        by musicavalanche ·

        In reply to Training And Documentation Needs To Teach How As Well As Why

        In both philosophy and computer design, why always precedes how. Thought dictates form. If you understand why, the how will always take care of itself.

      • #3211937

        Training And Documentation Needs To Teach How As Well As Why

        by jstaub ·

        In reply to Training And Documentation Needs To Teach How As Well As Why

        What about “user” documentation? I would think that the users might want to know “why” they are doing a certain thing a certain way. In my 20 years’ experience, it’s up to IT to tell them. So, if we want why in our documentation, then we should be providing it in the documentation we give them.

    • #3167483

      Information Overload: Introduction

      by justin james ·

      In reply to Critical Thinking

      This is the introduction to a two part series.

      Bill Gates recently wrote about the overload of information that people face in Microsoft’s June 27th, 2006 Executive Email (“Bill Gates on the Unified Communications Revolution”). What Mr. Gates understands, and most developers do not seem to understand, is that the proliferation of various electronic communications methods, while increasing the speed of communications, reaches a point of declining returns where productivity is actually harmed. Part of the problem lies in users’ lack of bit literacy. A much larger share of the blame falls on the shoulders of programmers and companies putting out software and devices which refuse to interact with anything else. While there are legitimate technical and business reasons for creating these isolated devices and systems, the solutions end up only addressing the users’ how and rarely their why.

      This series of articles will examine the different types of information overload, and what developers can do to address the problems.

      PART 1: Too Many Options

      Too Many “Inboxes”

      Users receive messages through a ton of different methods. To make matters worse, many of these methods require manual intervention. Once data is on the wire, it should find its way to the best destination based upon the receiver’s preferences and availability. For example, if I am away from my desk, email should come to my cell phone (and automatically sync with my desktop mail client) and MSN or Skype VoIP contact attempts should ring my cell phone. Once I return to my desk, they should come to my desktop, through a central piece of software. The average user now runs an email program, checks Webmail periodically, and juggles multiple email clients, three or four phone numbers, and even more inboxes besides, when they really just need one.

      Too Many Outboxes

      There are far too many ways to communicate with people. For most of the people in my network, I have the option of using the phone, IM, and email. Some users add in various Web forums, walled garden outboxes, faxes, and even more ways of sending a message. What users really need and want is one method of sending voice, text, and data from their desk, and another way of doing it mobile, with the best outbound path being chosen based upon availability of the data path and the receiver’s online status.

      Too Many Sign-ons

      We all have way too many logins to manage. Over the course of a day, most people have three voicemail passwords (home, work, and cell phones), 2 or more IM usernames, a webmail username for personal email, a work login (and way too many companies have different logins for each system as opposed to just all using the same username/password database), and so on and so on. It is simply unmanageable. A number of years ago, Microsoft pushed Passport, and ABM pushed the Liberty Alliance. The only place I see Passport is on Microsoft properties, and I do not even know who uses Liberty Alliance. Because of username/password overload, users do not access all of the routes that belong to them.

      PART 2: Data Difficulties

      Walled Gardens

      Far too many content and communications systems are walled gardens. What this means is that while working within the system is great, it has no way of communicating with the outside world. MySpace is the biggest example of this. The blogs you post there cannot be accessed via RSS, the songs, videos, and images you post cannot be accessed outside of their system through URLs, and you are unable to send messages outside (or into) MySpace. In a nutshell, it is like having a cell phone that is unable to call any phone except other cell phones with that provider. What this does is force users to spend far too much time accessing and using far too many devices and systems in order to work with everyone in their network.

      Lack of Storage Unification

      Even for systems that are not walled gardens, there is a complete lack of storage unification. For example, a message sent through Yahoo! Mail does not appear in my Outlook “Sent Items” folder. A file stored in Flickr is completely cut off from my local file system; a change in one requires manual intervention on the other. As a result, users end up with a large number of places for data to be stored and hidden. Users need fewer storage areas, not more, and those areas should seamlessly sync and interact with each other. Even more importantly, simple data such as basic word processor documents need an easy way to appear on Web sites.

      The Metadata Problem

      Metadata is great. Not only does it provide useful information, but it is a great aid in the data searching process. Unfortunately, very few systems automatically generate metadata. While they offer systems for the manual creation of metadata, rare indeed is the user who tags, marks up, or enters metadata. More systems need to find ways of properly providing the metadata automagically.

      Poor Search and Lookup

      Most search systems offer simple text search, and sometimes allow the user to refine their search based upon metadata. This is just not enough. For example, if you remember that someone emailed you their phone number, which is better? Searching for all emails from that user, or being able to use a pre-defined regular expression (such as “find all emails from John that contain a phone number”)? More systems need to support regular expressions in a way that the average user can understand, and pre-define appropriate searches. In addition, more systems need to search through automagically generated metadata.

      Data Format Chaos

      Right now, all of our data is in far too many formats. Format incompatibility and conflicts are the kinds of issues that users simply do not care about, only vendors. The user does not care if they are using ODF or OpenXML, they only notice the format if it gives them a problem. Documents need to be able to be seamlessly shifted to the Web or from the Web to an environment where they can be edited. Vendors need to learn that the value they add to the users does not come from a file format, but comes from their business logic. Furthermore, as code increasingly gets developed within managed code with massive standard libraries (Java, .Net) or uses common open source libraries, the urge to develop custom file formats is reduced.

      J.Ja

    • #3209869

      Information Overload – Part I: Too Many Options

      by justin james ·

      In reply to Critical Thinking

      This is the first of a two part series.

      Information overload is a condition that many, if not most, users face on a daily basis. Email, phone calls, instant messages; the list is endless. While end users are generally not very skilled in bit literacy, developers shoulder much of the blame, and it is up to the developer to make sure that their applications and devices actually help the user rather than hinder them. In this initial part of my “Information Overload” series, I will show some examples of how developers can help resolve this problem.

      Too Many “Inboxes”

      Users receive messages through a ton of different methods. To make matters worse, many of these methods require manual intervention. Once data is on the wire, it should find its way to the best destination based upon the receiver’s preferences and availability. For example, if I am away from my desk, email should come to my cell phone (and automatically sync with my desktop mail client) and MSN or Skype VoIP contact attempts should ring my cell phone. Once I return to my desk, they should come to my desktop, through a central piece of software. The average user now runs an email program, checks Webmail periodically, and juggles multiple email clients, three or four phone numbers, and even more inboxes besides, when they really just need one.

      The easiest way to “deal” with this problem is to simply ignore it. It seems like every developer imagines that their application will become the users’ new “home” throughout the day. This is an example of “developer hubris.” Chances are, your user will not “live” in your application. In reality, your users have Outlook (or some equivalent) open all day long, and an IM application or two. The last thing your users need is yet one more application sitting in the system tray, popping up a “piece of toast” every thirty seconds demanding attention or notifying them of a new message. On the other hand, you do not want them to only check your application once every few hours to see if they need to do anything in it.

      The best way to resolve this problem is to acknowledge that your software is probably not the most important thing in the users’ lives, and to work with their existing systems. Instead of delivering messages and notifications to your own data store, why not deliver them to Outlook? If the users do not use Outlook, send the messages via RSS or email. This way, the users do not need to monitor yet one more application. In addition, the users are already familiar with their existing applications, and can use them to organize your information in the manner that suits them best. Another advantage is that this approach is actually less work for you. If you have to choose between reinventing the wheel (data storage, searching, organizational capabilities, filtering, etc.) and simply delivering the information via a standard delivery mechanism, which would you rather do?
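
      As a quick sketch of how little work the email route is, here is roughly what delivering a notification with Perl’s standard Net::SMTP module looks like. The relay, addresses, and message are made-up values for illustration:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use Net::SMTP;    # ships with Perl; assumes you have an SMTP relay to use

        # Hypothetical values, for illustration only.
        my $relay   = 'mail.example.com';
        my $from    = 'notifier@example.com';
        my $to      = 'user@example.com';
        my $subject = 'Nightly report is ready';

        my $smtp = Net::SMTP->new($relay) or die "Cannot reach $relay";
        $smtp->mail($from);
        $smtp->to($to);
        $smtp->data();
        $smtp->datasend("From: $from\n");
        $smtp->datasend("To: $to\n");
        $smtp->datasend("Subject: $subject\n");
        $smtp->datasend("\n");
        $smtp->datasend("Your nightly report finished and is waiting in the usual place.\n");
        $smtp->dataend();
        $smtp->quit;

      The notification lands in the inbox the user already watches all day, gets filtered by rules they already wrote, and is searchable with tools they already know.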

      Too Many Outboxes

      There are far too many ways to communicate with people. For most of the people in my network, I have the option of using the phone, IM, and email. Some users add in various Web forums, walled garden outboxes, faxes, and even more ways of sending a message. What users really need and want is one method of sending voice, text, and data from their desk, and another way of doing it mobile, with the best outbound path being chosen based upon availability of the data path and the receiver’s online status.

      This is really the same problem as the “Too Many Inboxes” issue, just in reverse. Why not offer the users a way to communicate from their existing application, using your application as a pass through? For example, a document management (or online content management system) that presents itself as a Word plug-in is a heck of a lot easier for a user to use regularly than a system where the user needs to take active action. A system that lets a user work with it from within Outlook to perform online scheduling (or send an email to a special email address) is going to get a lot more use than something that lives in its own space.

      The vast majority of the overwhelmed are using Microsoft Office. Modern versions of Office are capable of being plugged into via COM. The .Net Framework makes it extremely easy and attractive to target an Office application as the wrapper for your code. There is a reason why so many users use Excel and Word to do things that they “shouldn’t” do; these applications are familiar and comfortable to them. Instead of writing yet another application that gets dusty and grows mold through disuse, why not leverage the users’ existing comfort zones and knowledge base and work within Office?
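
      The COM route is not limited to .Net, either. As a rough illustration (it assumes a Windows machine with Outlook installed and the Win32::OLE module available; the task text is made up), pushing a task into the user’s existing Outlook task list takes only a handful of lines of Perl:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use Win32::OLE;    # assumes Windows with Outlook installed

        # Attach to a running Outlook instance, or start one if needed.
        my $outlook = Win32::OLE->GetActiveObject('Outlook.Application')
                   || Win32::OLE->new('Outlook.Application')
            or die "Cannot reach Outlook: " . Win32::OLE->LastError();

        # 3 is the olTaskItem constant; 0 would be olMailItem.
        my $task = $outlook->CreateItem(3);
        $task->{Subject} = 'Approve the quarterly report';       # hypothetical
        $task->{Body}    = 'Created automatically by the reporting system.';
        $task->Save();    # the task now lives in the user's own task list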

      Too Many Sign-ons

      We all have way too many logins to manage. Over the course of a day, most people have three voicemail passwords (home, work, and cell phones), 2 or more IM usernames, a webmail username for personal email, a work login (and way too many companies have different logins for each system as opposed to just all using the same username/password database), and so on and so on. It is simply unmanageable. A number of years ago, Microsoft pushed Passport, and ABM pushed the Liberty Alliance. The only place I see Passport is on Microsoft properties, and I do not even know who uses Liberty Alliance. Because of username/password overload, users do not access all of the routes that belong to them.

      Passport seems to involve giving Microsoft money, and who knows what the case is with Liberty Alliance? One alternative is to leverage the users’ existing authentication scheme, whether it be LDAP, Active Directory, or something else. The Higgins Trust Framework seems like a great idea as well (essentially an open source version of Passport or Liberty Alliance), and I truly hope it goes somewhere. Unfortunately, most services will not let you authenticate against their servers, at least not the common ones. For example, Yahoo! will not let you authenticate against a Yahoo! ID without some sort of partnership. I think this is a real shame. Most people have a Hotmail or Yahoo! username (Google is pushing their users to get Google IDs as well, but their market penetration is startlingly low in non-search market segments).
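
      Checking a login against a directory the company already runs really is a small job. Here is a rough sketch in Perl (it assumes the Net::LDAP module from CPAN; the server name and DN layout are invented for the example):

        #!/usr/bin/perl
        use strict;
        use warnings;
        use Net::LDAP;    # from CPAN; server and DN below are hypothetical

        sub ldap_login_ok {
            my ( $username, $password ) = @_;

            my $ldap = Net::LDAP->new('ldap.example.com')
                or return 0;    # directory unreachable

            # Bind as the user; the directory does the password checking.
            my $dn   = "uid=$username,ou=people,dc=example,dc=com";
            my $mesg = $ldap->bind( $dn, password => $password );
            $ldap->unbind;

            return $mesg->code == 0;    # 0 means the bind succeeded
        }

        print ldap_login_ok( 'jdoe', 'secret' ) ? "ok\n" : "rejected\n";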

      However, systems designed for internal usage have no excuse whatsoever. If you are writing a client/server application, you should not ever need to ask the user to authenticate, unless they need to escalate their login to a more privileged level. Except in a few very rare occasions, the fact that a user is logged into their desktop should be good enough for your application. Reduce the logins, and increase the happiness.

      J.Ja

      • #3277602

        Information Overload – Part I: Too Many Options

        by asparagus201 ·

        In reply to Information Overload – Part I: Too Many Options

        From a lurker not working for Microsoft, Passport is and always was free. That it failed to catch on and is in the process of re-branding as a Live ID, which may or may not help, is problematic. As for Liberty Alliance, I don’t even recognize that name. The idea of a single ID was mounted by AOL as a wallet as well as the two you mentioned. Why it didn’t fly is beyond me.

        I should also note I have separate IDs to CNET for their Forums. Not by choice or to avoid sales efforts which may be reasonable causes for some, but through the vagaries of web development and interlinked partnerships, I am guessing.

        To comment here I had to log in. (Actually, I used a Yahoo email when setting up this Forum account, to be a throw-away; the email newsletter comes to a Hotmail address which TechRepublic is happy to email me at but won’t allow me to log in with – DUH.) So I heartily agree with what you say, though what you (here read TechRepublic) do is inconsistent and annoying.

      • #3277551

        Information Overload – Part I: Too Many Options

        by mikemu ·

        In reply to Information Overload – Part I: Too Many Options

        Here is my obnoxious list of rhetorical questions…

        If I don’t own a PC with Microsoft XP/Office XP and up, am I going to be left in the dark/out in the cold?

        Do I really need MS Office? Do I need to read the reams of online documentation just to figure out how to temper the automatic style updating/reformatting stuff? How many checkboxes should my word processor have in the configuration options?

         

      • #3277489

        Information Overload – Part I: Too Many Options

        by wearsmanyhats ·

        In reply to Information Overload – Part I: Too Many Options

        I think these are all valid points and I think also that the computer industry as a whole is blind to them. That is the problem and I’m glad someone has decided to state what, to many outside of IT, are obvious points. Bringing this to the attention of Developers is a great way to start.

        However, I take exception to the following quote and the paragraph after:

        “However, systems designed for internal usage have no excuse whatsoever.”

        I work in healthcare and this last paragraph would get much hearty laughter until everyone realized that the speaker/writer was not joking. Perhaps this was the “rare” occasion mentioned but I find healthcare security is becoming much more ubiquitous nowadays. A reliable, cheap form of single sign-on and authentication would be welcome by computer users everywhere.

        The Liberty Alliance is Sun Microsystems’ (and other organizations’) attempt at an open-source authentication system. Just because one hasn’t heard of something doesn’t mean that it can be dismissed (as one commenter suggested).

      • #3277463

        Information Overload – Part I: Too Many Options

        by justin james ·

        In reply to Information Overload – Part I: Too Many Options

        WearsManyHats –

        “I work in healthcare and this last paragraph would get much hearty laughter until everyone realized that the speaker/writer was not joking. Perhaps this was the “rare” occasion mentioned but I find healthcare security is becoming much more ubiquitous nowadays. A reliable, cheap form of single sign-on and authentication would be welcome by computer users everywhere.”

        Your segment of healthcare may indeed be the exception (I hope for your sake it is!). I deal with pharmaceutical companies all day long as my clients. Some of them do indeed “have it together” for single logins. Others do not. One of my customers recently provided me with a laptop to access their VPN. It requires a VPN token that attaches via USB, and a login. The VPN login is not the Active Directory login; they are completely separate. In some systems I use the Active Directory login, in other cases I use my ID number from HR (all people, regardless of whether they are employees or contractors, get an HR ID). The HR ID is harder to remember than a Social Security number, and the username involves numbers. Their internal websites cannot seem to agree which identifier to use. And so on.

        I wish I could say that healthcare “gets it” but I think it is indeed a matter of segment and/or company. This “problem company” is not some small boutique drug researcher either; they are one of the Top 5 pharmaceutical companies in the world. Indeed, maybe that is their problem: too big to get IT done.

        J.Ja

      • #3279029

        Information Overload – Part I: Too Many Options

        by gsosa70 ·

        In reply to Information Overload – Part I: Too Many Options

        Check out http://www.avatier.com. They have a cool password sync solution, great for organizations with one or more directories (AD, LDAP, etc.). Pretty slick.

    • #3211205

      Information Overload – Part II: Data Difficulties

      by justin james ·

      In reply to Critical Thinking

      This is the second in a two part series.

      In the first part of this series ("Information Overload – Part I: Too Many Options"), I discussed how developers create too many options and data paths for users, creating a situation of information overload. In this part, I will review how issues with the data itself contribute to the problem, and how developers can help resolve it.

      Walled Gardens

      Far too many content and communications systems are walled gardens. What this means is that while working within the system is great, it has no way of communicating with the outside world. MySpace is the biggest example of this. The blogs you post there cannot be accessed via RSS, the songs, videos, and images you post cannot be accessed outside of their system through URLs, and you are unable to send messages outside (or into) MySpace. In a nutshell, it is like having a cell phone that is unable to call any phone except other cell phones with that provider. What this does is force users to spend far too much time accessing and using far too many devices and systems in order to work with everyone in their network.

      This is a bad case of developer hubris. Like the issues in Part I, too many developers seem to have the idea that their users will "live" in their application, on their device, or on their Web site. This is completely false. MySpace and Outlook are the only "homes" for people. People really do not need another. Outlook is hardly a "walled garden." MySpace sadly is.

      The rationale behind a "walled garden" is that you want your users to spend as much time there as possible. It is a problem particular to Web sites that earn money via advertising. MySpace is the perfect example, as stated earlier. If you know enough people on MySpace, you have almost no need for an email account, a Web site, an RSS reader, or many other applications (or Web applications). However, this comes at a price: you are cut off from anyone not on MySpace.

      Users really do not like "walled gardens"; they only tolerate them, and only if there is enough in the garden to be satisfactory. As soon as a more open or more attractive option becomes available, the users flee. Look at the failure of every single portal in the late ’90s. AOL is another good example of what happens to "walled gardens." Its closed system could not survive the Internet.

      At the end of the day, try to avoid creating a "walled garden." Sometimes business dictates that you must, but whenever possible, hook into known, open standards such as SMTP, NNTP, POP3, LDAP, IMAP, etc. The more you allow your system to interact with others, the more likely you are to gain and retain users.

      Lack of Storage Unification

      Even for systems that are not walled gardens, there is a complete lack of storage unification. For example, a message sent through Yahoo! Mail does not appear in my Outlook "Sent Items" folder. A file stored in Flickr is completely cut off from my local file system; a change in one requires manual intervention on the other. As a result, users end up with a large number of places for data to be stored and hidden. Users need fewer storage areas, not more, and those areas should seamlessly sync and interact with each other. Even more importantly, simple data such as basic word processor documents need an easy way to appear on Web sites.

      Where systems and users are headed is a new version of thin client computing, but unlike traditional thin client computing, the data is stored all over the place, not on a central server. Because of this, each system has its own rules for data storage and retrieval, and there are higher barriers between them. In the thin client days (as well as the client/server days), you could count on each piece of software having access to the same data, either through the file system or a database connection. Now, with the data locked behind a registration screen and no standard APIs for over-the-Web authentication (do you use HTTP authentication headers? Pass usernames/passwords through GET or POST variables? What are the fields called?), it is much harder for various systems to read from or write to each other. At best, one site will provide you with an HTML snippet (YouTube style) to post your content from one site onto another site.

      Of course, this is really no different from what the Web was originally conceived to do: allow data from many different sources to be put together into one place. But there is now a business model attached to it. A company cannot make a living if you link directly to an image or movie on their site; they need to wrap it in their packaging. Again, look at YouTube. They could provide you with code to allow their movies to appear in the user’s media player of choice within the page. Instead, they put a wrapper around it that encourages you to visit their site. In the future, will they be splicing their ads into it as well? Google Maps followed this path. They waited until a critical mass of Web sites were using their API, then suddenly started plastering their ads on those maps.

      This would not be as much of a problem, except that data is now scattered all over the Web, with different logins and different methods of retrieving or updating it. This leads back to "too many options." Even with a self-contained system, like Salesforce.com, the result is chaos. Some data is within the SOA provider’s cloud, some of it is stored within the LAN. Getting applications (and users) to work with two separate pieces of data like this is not easy at all.

      Unfortunately, there is no easy fix for this at this time. The only hope is for a set of standards to be developed which help resolve this situation. Anything short of being able to access a piece of data just like network storage is not good enough.

      The Metadata Problem

      Metadata is great. Not only does it provide useful information, but it is a great aid in the data searching process. Unfortunately, very few systems automatically generate metadata. While they offer systems for the manual creation of metadata, rare indeed is the user who tags, marks up, or enters metadata. More systems need to find ways of properly providing the metadata automagically.

      You, as a developer, need to find a way to determine the applicable metadata within your user’s data, and make it available easily and openly. Plain text search just is not good enough either. There needs to be relevant, searchable metadata. It is a shame that WinFS is dead; I loved the idea. In general, file systems do not store any truly relevant metadata.

      Poor Search and Lookup

      Most search systems offer simple text search, and sometimes allow the user to refine their search based upon metadata. This is just not enough. For example, if you remember that someone emailed you their phone number, which is better? Searching for all emails from that user, or being able to use a pre-defined regular expression (such as "find all emails from John that contain a phone number")? More systems need to support regular expressions in a way that the average user can understand, and pre-define appropriate searches. In addition, more systems need to search through automagically generated metadata.

      This is a tall order. One thing I think would be helpful is for languages (or regex libraries) to have some standard regexes built in. It would also be great if SQL supported regexes, for those applications that query against databases. Languages also need to do a better job of supporting regexes. OOP languages in particular make regexes too much work. I like that Perl brought regexes to the level of string operators. One reason why Perl is "so good" at regexes compared to other languages (despite other languages typically using Perl-like regex syntax) is that a regex is a string operator.

      That aside, if you want to support truly useful search/replace in your software, not only do you need to provide regex support, but it has to be done in a user friendly way. Pre-built regexes (email addresses, phone numbers, mailing addresses, other relevant data) help, but the syntax itself needs to be friendlier too, maybe something along the lines of Office string formatting codes.
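
      To give a feel for what a pre-built, named pattern could look like in practice, here is a quick Perl sketch. The phone number pattern is a deliberately simplified, North America only illustration, and the sample messages are invented:

        #!/usr/bin/perl
        use strict;
        use warnings;

        # Named, pre-built patterns a user could pick from a list instead
        # of writing regex syntax themselves. Simplified for illustration.
        my %prebuilt = (
            'phone number'  => qr/\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}/,
            'email address' => qr/[\w.+-]+\@[\w-]+(?:\.[\w-]+)+/,
        );

        # Pretend these were pulled from a mail store.
        my @messages = (
            { from => 'John', body => 'Call me at (555) 867-5309 after lunch.' },
            { from => 'John', body => 'Lunch tomorrow?' },
            { from => 'Mary', body => 'My cell is 555-123-4567.' },
        );

        # "Find all emails from John that contain a phone number."
        my $pattern = $prebuilt{'phone number'};
        for my $msg (@messages) {
            next unless $msg->{from} eq 'John';
            print "Match from $msg->{from}: $msg->{body}\n"
                if $msg->{body} =~ $pattern;
        }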

      Data Format Chaos

      Right now, all of our data is in far too many formats. Format incompatibility and conflicts are the kinds of issues that users simply do not care about, only vendors. The user does not care if they are using ODF or OpenXML, they only notice the format if it gives them a problem. Documents need to be able to be seamlessly shifted to the Web or from the Web to an environment where they can be edited. Vendors need to learn that the value they add to the users does not come from a file format, but comes from their business logic. Furthermore, as code increasingly gets developed within managed code with massive standard libraries (Java, .Net) or uses common open source libraries, the urge to develop custom file formats is reduced.

      Pick a common file format, whether it is ODF, HTML, XML, OpenXML, CSV, or whatever is applicable to your data needs, and support it natively and seamlessly. This file format should be one that other applications in the same market, as well as related markets, support. A good example of a data format that "refuses to die" is the dBase format. Despite the fact that no one has used dBase itself in I do not know how long, the format is still around because dBase was popular enough that everyone allowed importing and exporting from it. As a result, everyone retains that dBase compatibility, and the format itself has developed a second life as a common database data sharing format. Maybe if OpenXML ends up as open as ODF (or nearly as open) it will achieve the same status. HTML is at that point already, but unfortunately it is not well suited for data transportation, only data display. XML in and of itself is not a very good format for data exchange; if different vendors do not use the same schema (or interpret the schema in different ways), it becomes useless. XML really relies upon certain standards (such as RSS) to become a useful format.

      If you can, get together with other vendors to work out a common data format. Doing business based upon data format lock-in, as opposed to competing on actual feature sets and performance, hurts the users. Look at the trouble Microsoft has had over the years because of their various attempts to lock users in based on formats. It does nothing but generate bad press and bad feelings. Reducing the data format mess is a joint effort, and you cannot do it alone unless you are working in a well established market with well established standards.

      J.Ja

    • #3277513

      Running Water Web Design

      by justin james ·

      In reply to Critical Thinking

      I think that it is time to fully explain my philosophy of Web site design and development. I call this theory “running water Web design,” because it follows the idea of running water: smooth flowing, taking the path of least resistance, and simplicity. This philosophy does not force anything upon the user, adapts to their needs and environment, and degrades gracefully. It makes Web sites that endure the inevitable change, while delivering your content to the end user in a manner that makes it as easy as possible for the end user to consume it, with an emphasis on usability.

      To distill the running water Web design philosophy into one simple sentence: make no assumptions about your user, their technical savvy, or their technical capabilities, while at the same time never forcing them to do things your way instead of their way.

      The first step in adopting this philosophy is to simply give up on the idea of being able to precisely control the way your Web site appears to the end users. Once you surrender to the reality that users have a wide variety of screen resolutions, operating systems, color depths, and needs, you give yourself the freedom to design great looking, highly usable Web sites.

      Use CSS as much as possible, but only use CSS that all browsers will accept and only use CSS that the site will properly work without. CSS is great because it allows you to separate your presentation from your content as much as possible. This also translates to cleaner server side code, because design changes require minimal editing of templates or of code that generates HTML. But CSS has a dangerous side as well, which is that despite the perpetual call for HTML standard compliance, different browsers interpret much of it differently, or have their own specific extensions to it. This means that you should be very careful with your CSS. Unless you are designing an intranet, where you are guaranteed that users have a particular browser with particular settings, you cannot count on your CSS working right if you rely upon a particular browser’s interpretation of it. Some users will disable CSS entirely, or have their own browser settings that override or ignore your CSS, so make sure that your site is still usable without the CSS as well.

      All font sizing should be proportional; any non-standard sizes (such as “smaller”) should be percentages of the base size, and the base size should be set in ems, not pixels or points, in order to ensure correct cross platform sizing. Different platforms interpret “pixel” differently, in terms of onscreen DPI. The end result is that a font defined as “14 px” will appear differently on different screens (Mac OSX is one example of an OS that does not follow the norm). Font points are also slightly different from OS to OS, and should be avoided as well. Ems are the only unit of measurement that you can count on being the same on every user’s system. Never forget: anything less than the equivalent of 10 point font is going to be unreadable to a substantial portion of users. If you find yourself tempted to use a small font for whatever reason, take a moment to consider if losing the page views of people with less than perfect vision is worth preserving your design.

      Using fixed width table or page sizing is acceptable, as long as at least one portion is set to a width of 100% in order to consume the user’s window, as well as to scale down properly if their window shrinks. One of the biggest frustrations I have with Web sites is seeing a site where the designer made sure that it worked great at 800 x 600 resolution without allowing it to easily scale upwards for users with higher resolutions. The end result is a design where users at higher resolutions see content floating in a sea of background color or tiled images, causing unnecessary vertical scrolling. Even worse, to compensate for these horrible fixed width designs, the designer often forces a font size that is nearly unreadable to those with less than perfect vision. Your user picked a specific size for their browser window on purpose. They have a particular display resolution for a reason. Forcing the user to scroll unnecessarily will lead them to subconsciously hate your site.

      Any design element must be wholly optional or changeable according to the users’ web browser settings without breaking the site or rendering it useless. A great example of this is link color. Over the years, the user has become accustomed to the default link colors of their browser. Trying to change this or using non-default link colors will cause the user to have a more difficult time finding the links. Even worse, if your background is similar or the same as the user’s standard link colors and the user has forced their browser to use the default link colors no matter what, the user will not see your links at all. Font size is also in this category. Modern Web browsers allow the user to force their own font sizes onto any site. If this completely breaks your design, your design needs to change.

      Eliminate the use of any non-HTML compliant tags or code. Do not use HTML compliant tags that major browsers interpret in a radically different way that will break your site design. Again, our goal is to make sure that anyone interested in your content is able to consume your content. Any design element which does not accomplish this goal is a waste of time and makes QA a nightmare.

      Do not use fancy code that will not work well for users with screen readers as well as content or navigation that requires user input (including mouse gestures) to be visible. Not only is writing this kind of code a lot of additional work, QA testing on it is a royal pain in the neck. Why make your life hard, as well as turning users off to your site for some drop down menu or flying widget? Many users (if not most users) instinctively position their mouse on the vertical scrollbar while reading your site. If the content or navigation requires any kind of input to be visible, many or most users will not see it. This especially applies to links; if the link is not noticeable without a gesture or input from the user, you can bet that you are going to lose a ton of page views.

      If user input other than a single left click is required to view or activate content, assume that the user and search engines will not see it or be able to access it. Any content displayed in such a way must not be critical or important information. Any interface behavior that operates in a desktop application manner as opposed to a Web browser manner is not good either. I know, I have been hammering this point quite hard lately. There is an excellent reason for this. Any user activity other than a single mouse click, or typing into clearly defined text boxes, is a total failure of “the Mom test.” Users do not experiment with interfaces; users do not instinctively know that things can be dragged/dropped or right clicked or whatever. To have your design require this type of input (even if there are clear directions) renders your site useless to many users. For example, if clicking on a column header sorts the contents, also provide a drop down list for sorting. Yes, some users know that when the column header is linked, clicking it will typically sort the table on that column. But a huge portion of users don’t know this. When your user opens a Web browser, they expect anything within it to act like a Web site, not Excel or Outlook or whatever application you think you are replicating, regardless of how similar it looks to the original.

      Any client side scripting (including, and most importantly, input validation) must be replicated server side as well, in order to allow “tightened down” browser installations, downscale installations, and automated site usage (search engines, screen scrapers, etc.) to properly use the site. Malicious users should not be able to circumvent your security by disabling client side scripting. This is an easy trap to fall into; client side scripting, especially input validation, is friendlier to the user. Why have a round trip to the server just to tell them, “Sorry, your chosen password does not meet our strength requirements”? On the other hand, the moment you assume that the user has client side scripting enabled or that your client side scripting worked properly on their browser, you are in deep trouble. Many users disable client side scripting, or restrict its use to pre-approved Web sites. Other users may have a browser with a different implementation or interpretation of your client side scripting. And one of the first things a malicious user will do is try your website without client side scripting, to see if they will be able to pass bad or dangerous values to your backend server. The last thing you need is to have assumed that everything was handled client side, and to not re-validate and error trap on the server side.
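
      A bare bones sketch of what I mean, using Perl’s standard CGI module, is simply to run every rule again on the server, no matter what the JavaScript already claimed. The field name and the rules here are made up for the example:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use CGI;    # standard module; the 'password' field is hypothetical

        my $q        = CGI->new;
        my $password = $q->param('password') || '';
        my @errors;

        # Repeat every check the client side JavaScript performs; never
        # trust that the JavaScript ran, or that it ran unmodified.
        push @errors, 'Password must be at least 8 characters.'
            if length($password) < 8;
        push @errors, 'Password must contain at least one digit.'
            unless $password =~ /\d/;

        print $q->header('text/html');
        if (@errors) {
            print '<p>', $q->escapeHTML($_), "</p>\n" for @errors;
        }
        else {
            # Only now is it safe to act on the submitted value.
            print "<p>Thank you, your password was accepted.</p>\n";
        }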

      All images must have ALT text as well as sizing information to ensure proper display. Never assume that the user is able to view the images. You want your users to be able to start consuming your content immediately, not to have to wait for images to download. Users on slow connections may even be done with that particular page view before the images finish downloading. Proper usage of image sizes helps the browser lay out the page before the images download, and helps it render them properly once they do (IE, for example, has a bad habit of resizing images as it sees fit). ALT tags make the images useful to the user before they download, as well as provide context to a search engine indexing the site. There is no reason to leave out these critical, but simple, elements of HTML design.

      Whenever possible, allow the user’s browser setting to override your own settings without breaking the site. I have already mentioned this numerous times. Whenever you assume that whatever you have specified the site to look like will be what the user sees, you are going to be in trouble. There will always be users out there who override your link colors, font sizes, window sizes, and so on with their own settings. If your site breaks under these conditions, you will be in trouble. If your navigation consists of transparent images with white text, and you set a dark image to be the background, and the user disables background images, they will not see your navigation. If you use the default link colors for a background, and specify your own link colors, and the user forces the default link colors, your site will not be usable by those users. If your site relies on a particular font size to not overflow itself, and the user forces a particular font size for readability, you are going to be in trouble. Making no assumptions about the user or their browser settings will help you have a grateful audience that is delighted to use your site.

      Aim your code towards the current least common denominators: screen resolutions, color depths, HTML levels, plug-ins, etc. If your site requires 1024 x 768 resolution as a minimum or 16 bit color or a plug-in or a JVM or something else that may not be installed, turned on, available, or set, you are limiting who can view your site. Why turn away visitors just because their setup or computer does not meet your definition of how a computer should be set up?

      Never, ever use JavaScript to activate a page view, unless your AJAX (or similarly coded) application requires it. This completely breaks the user’s ability to use their browser’s tabbed browsing or to open a link in a new window. It also makes the link invisible to a search engine. If you need to open a link in a new window, use “target=_blank”, not JavaScript. If you want to try to force that window to be a certain way (no scrollbars, a certain size, etc.), use JavaScript on that new page to do it; do not try to do it within the page link.

      Stick to major navigation being placed on the top or left as much as possible. The top and the left (particularly the top left corner of the page) portions of the page are what the users’ eyes are drawn to upon the initial page view, and the user spends more time looking there than anywhere else on the screen. If you put your navigation there, the users will be able to quickly and easily navigate your site. Another fact to consider is that these locations are where users are accustomed to looking for navigation elements, since most Web sites put the navigation there. If you put expected design elements (navigation, site search, “email the Webmaster” link, etc.) where the user is used to seeing them, your users will be able to use your site from the get go. No one likes having to learn to use a site, and that learning curve significantly damages usability. You want your users to remember your site as “the one with good prices that I had no problems with,” not “the one with great prices, but I could not figure out how to add items to my cart.”

      Remember that the top left corner is the most important place on the screen; the bottom right corner is the least important. This means that whatever you feel is the most important part of your site design, whether it be the logo for branding, a site search, navigation, or something else, needs to go here. There is no way to change this fact. If you put the site search at the bottom right corner, no one will use it.

      All links must obviously be links. This is something else that has been changing since the introduction of CSS. For whatever reason, many Web designers feel that making links look less “ugly” by removing the underlines and/or making them look like the rest of the text is a Good Thing. I suppose on an aesthetic level, they are correct on occasion. In terms of usability, they are dead wrong. If you want to drive users away ASAP, make it impossible for them to find the links they want to follow by making them blend into the rest of the text.

      Offer a “printer friendly” link on every page. This is optional, but as highly recommended as can be. When a Web page is printed, none of the design elements are useful to the person reading the hard copy. They cannot click the links, the styles are useless, frequently it is being printed on a black and white printer (or in black and white mode), and so on. All the user wants is the content. Give the user what they want, and provide a “printer friendly” link that provides the content, plain and simple, in a format and font suitable for printing as opposed to viewing on a screen. If the content is split over multiple page views, the printer friendly version should provide all of the pages as one page.

      URLs should be browser bookmark, email, and search engine friendly. If they are not, the server should be able to determine the correct content to display to the user. If you are passing really long URLs around that may break in email clients, or may get mangled if someone tries to post them to a newsgroup or Web forum, you are missing out on potential page views. If your URLs contain a session ID, you are giving your users a lousy experience, especially if they send “?product_id=50&session_id=541039ASD” to a friend in a URL, and the friend gets a giant warning message about an expired session. Do you really think that the average user is able to correct the URL to see the product? Of course, you also want search engines to spider your site nicely, and present clean URLs to the user. Ugly, long, session specific URLs may make life a touch easier on the backend, but they are going to kill you on the front end. And accepting the session ID from the URL is a dangerous habit, as it allows a malicious user to participate in another user’s session with enough automated guessing or reverse engineering of the ID creation system.
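
      A rough sketch of the alternative, using Perl’s standard CGI and CGI::Cookie modules, is to keep the session in a cookie so the URL carries nothing but the content identifier. The parameter names and the way the ID is minted are simplified for illustration:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use CGI;
        use CGI::Cookie;
        use Digest::MD5 qw(md5_hex);    # only to fake up an opaque session id

        my $q = CGI->new;

        # Reuse an existing session cookie, or mint a new opaque id.
        my %cookies    = CGI::Cookie->fetch;
        my $session_id = $cookies{session_id}
                       ? $cookies{session_id}->value
                       : md5_hex( time . $$ . rand() );

        my $cookie = CGI::Cookie->new(
            -name    => 'session_id',
            -value   => $session_id,
            -expires => '+1h',
        );

        # The URL stays clean: "?product_id=50" is all a friend ever sees.
        my $product_id = $q->param('product_id') || '';

        print $q->header( -type => 'text/html', -cookie => $cookie );
        print '<p>Showing product ', $q->escapeHTML($product_id), "</p>\n";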

      Avoid splitting content over multiple pages. Each page should be a discrete item, and not require additional page views to finish reading. If multiple page views are needed for whatever reason, usually due to an ad-driven business model where page views drive revenue, a “print friendly” link must display all pages as one. This serves many purposes. First of all, when someone finds the page through a search engine, and they land on page three out of seven, they may be rather confused, and they may not be able to find their way back to Page 1. Users really do not like having to go through multiple pages. In addition, compressing the content into one page increases its visibility to search engines. And search engines like to only display one or two pages for a site; would you prefer that the users see three separate pieces of content that might help them, or just three portions of the same piece of content? Remember, each time you require your users to use the navigation system, regardless of how well designed it is, you are introducing a potential difficulty or interface failure. Keeping your content on one page minimizes the points of potential failure for your users.

      Never break the user’s “Back” button or other elements of their Web browser’s functionality. If you are using something other than standard anchor tags for navigation, you are doing something wrong. By breaking the “Back” button and other basic functions of the user’s browser, you are damaging your usability. The user always knows where their browser’s “Back” button is; they may not always find the “Back” button that you coded onto the site. The same applies to “Close Window” buttons. If you have a procedure which will cause something bad to happen (such as double charging for a checkout) when a page is refreshed or a form is re-submitted, recode your backend logic. A simple disclaimer of “refreshing this page will cause you to be charged twice” just does not cut it. These are functions that the user has probably spent years learning how to use as they need to. Breaking them is like building a highway that requires drivers to drive on the opposite side of the road; it just does not work and will cause major problems.

      Never alter the browser window’s appearance (scrollbar color, title bar color, etc.). This has become very, very common. Between CSS to control appearance, and JavaScript to change the browser’s window display, Web designers and developers have gone nuts. For some reason, they seem to think that their particular design is more important than the user’s settings. I have been to sites where the scrollbar was forced to be nearly all the same shade of black, including the arrows. Needless to say, scrolling to see content would have been nearly impossible without a wheel mouse or knowing that the keyboard also scrolls the page (many users think that the keyboard is solely for entering input!). The user has a particular color scheme and window layout on their computer for a reason. Some users may require a high-contrast color scheme for visibility reasons. Others may have spent money or effort on a GUI skinning system to fit their own aesthetic tastes. There are a million reasons why the user’s browser window looks the way it does. So why break it? Doing so does not help the user, it just confuses them. If your Web site changes their browser to no longer look familiar, you have forced the user to relearn how to use their browser, just for your site. Remember, the goal is to work within the user’s environment, not to make the user learn your environment!

      To summarize the running water Web design philosophy into a simple checklist:

      * Use CSS as much as possible, but only use CSS that all browsers will accept and only use CSS that the site will properly work without.
      * All font sizing should be proportional; any non-standard sizes (such as “smaller”) should be percentages of the base size, and the base size should be set in ems, not pixels or points, in order to ensure correct cross platform sizing.
      * Using fixed width table or page sizing is acceptable, as long as at least one portion is set to a width of 100% in order to consume the user’s window, as well as to scale down properly if their window shrinks.
      * Any design element must be wholly optional or changeable according to the users’ web browser settings without breaking the site or rendering it useless.
      * Eliminate the use of any non-HTML compliant tags or code. Do not use HTML compliant tags that major browsers interpret in a radically different way that will break your site design.
      * Do not use fancy code that will not work well for users with screen readers as well as content or navigation that requires user input (including mouse gestures) to be visible.
      * If user input other than a single left click is required to view or activate content, assume that the user and search engines will not see it or be able to access it. Any content displayed in such a way must not be critical or important information.
      * Any client side scripting (including, and most importantly input validation) must be replicated server side as well, in order to allow “tightened down” browser installations, downscale installations, and automated site usage (search engines, screen scrapers, etc.) to properly use the site. Malicious users should not be able to circumvent your security by disabling client side scripting.
      * All images must have ALT text as well as sizing information to ensure proper display. Never assume that the user is able to view the images.
      * Whenever possible, allow the user’s browser setting to override your own settings without breaking the site.
      * Aim your code towards the current least common denominators: screen resolutions, color depths, HTML levels, plug-ins, etc.
      * Never, ever use JavaScript to activate a page view.
      * Place major navigation at the top or left of the page as much as possible.
      * Remember that the top left corner is the most important place on the screen; the bottom right corner is the least important.
      * All links must obviously be links.
      * Offer a “printer friendly” link on every page.
      * URLs should be browser bookmark, email, and search engine friendly. If they are not, the server should be able to determine the correct content to display to the user.
      * Avoid splitting content over multiple pages. Each page should be a discrete item, and not require additional page views to finish reading. If multiple page views are needed for whatever reason, a “print friendly” link must display all pages as one.
      * Never break the user’s “Back” button or other elements of their Web browser’s functionality.
      * Never alter the browser window’s appearance (scrollbar color, title bar color, etc.).
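
      To make the validation item concrete, here is a minimal sketch of duplicating a client-side check on the server. It assumes an Express-style Node server written in TypeScript; the route, field name, and e-mail rule are hypothetical stand-ins rather than a recommended implementation.

      // Minimal sketch: the same e-mail check that client-side JavaScript performs
      // is run again on the server, so a "tightened down" browser, a search engine,
      // or a malicious user cannot bypass it. All names are hypothetical.
      import express from "express";

      const app = express();
      app.use(express.urlencoded({ extended: false }));

      // In a real project this rule would live in one module shared by the
      // browser bundle and the server, so the two checks cannot drift apart.
      function isValidEmail(value: string): boolean {
        return /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(value);
      }

      app.post("/subscribe", (req, res) => {
        const email = String(req.body.email ?? "");
        if (!isValidEmail(email)) {
          // The client may have already reported this, but the server never
          // trusts that the client-side check actually ran.
          res.status(400).send("Please enter a valid e-mail address.");
          return;
        }
        res.send("Thanks for subscribing!");
      });

      app.listen(3000);

      The client-side copy of the check remains purely a convenience for the user; the server-side copy is the one that actually protects the site.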

      J.Ja

      • #3277379

        Running Water Web Design

        by jaqui ·

        In reply to Running Water Web Design

        I agree whole heartedly with the principles you list in this entry.

        I have used 10 different browsers to view my site, which is 100% styled through CSS, and only those with no CSS support don’t display it exactly the same way.

        I have also tested my site at 640 by 480 resolution, using a friend’s system that has that as the default. The only real issue is the extra scrolling added by the smaller size; even at 800 by 600 there is only one page that requires scrolling.

        My site uses zero plugins and zero client-side scripting. They are not needed for the purposes of the site.
        I personally don’t think client-side scripting or plugins are needed for any content that is important. If a site is coded so that they are, goodbye; my money goes elsewhere.

      • #3278259

        Running Water Web Design

        by john ·

        In reply to Running Water Web Design

        I agree with what you have to say about coding the site.
        Keep it simple has always worked for my clients,
        and I work on the principle that not everybody has high-speed access (DSL);
        some people still use dialup (snailnet).
        To test my sites I use Opera, as it will emulate numerous versions and styles of browsers,
        including text-based browsing.

      • #3278027

        Running Water Web Design

        by musicavalanche ·

        In reply to Running Water Web Design

        You have hit a solid home run yet again, J.Ja. You warrant a hearty raise, or at the very least, a book deal.

      • #3278002

        Running Water Web Design

        by dbmercer ·

        In reply to Running Water Web Design

        It’s a shame Tech Republic does not follow your advice.  I read your column in minuscule type with two very large grey columns on either side.  Had to do a lot of scrolling and squinting to read.

      • #3278548

        Running Water Web Design

        by mark johnson ·

        In reply to Running Water Web Design

        I agree with David Mercer – if it wasn’t for the fact that TechRepublic occasionally delivers pearls of wisdom like this article, I would have abandoned it precisely because it breaks so many of the rules advocated.

        What is it with only using 30% of my screen width to display information that I care about!

        PS. The pearls are becoming more rare.

      • #3208915

        Running Water Web Design

        by dr. tarr ·

        In reply to Running Water Web Design

        Wow. Could you please forward this blog entry to whoever designed the TR site? After I copied and pasted the entry into Word so I could get the font large enough to read, I decided that I needed to have the web designer who maintains my site read the entry; maybe he will understand the way you wrote it, because he sure doesn’t seem to understand me! Oops, there is that session ID you talked about in the URL I copied and pasted. If this site had a built-in “forward this article” feature I could use that, or I guess I could forward the Word document I pasted into to get the font large enough to read, but I really don’t like doing that. I guess I’ll send him the URL, and if he isn’t smart enough to figure it out, it’s time for a new web designer.

        Oops, I just realized that the URL I was copying was the one from the comments editor page.  The actual blog page does not have a session ID.  My bad.

        As I type this I notice that the icons at the top of the edit window are reloading every time I hit a key. What’s up with that?! I’m sure glad I have a really fast connection; that could get real annoying at 28.2.

         

        EDITED – Added Information

      • #3208793

        Running Water Web Design

        by jasonbunting ·

        In reply to Running Water Web Design

        Most of these are either common sense by now (we have been hearing them for years) or ridiculous in most cases. I don’t care if people want to force an arbitrary font size on my site – if they do and it looks like crap, oh well. There is so little ROI for most of this stuff that I don’t see myself ever having the luxury (?) of wasting my time on it – you want to view it at 800×600 and not scroll a bunch? Too bad. Printer-friendly on each page and worry about whether or not they disabled CSS?!?! 

      • #3208792

        Running Water Web Design

        by justin james ·

        In reply to Running Water Web Design

        Jason –

        If you honestly believe that Web usability has a low ROI, I suggest you check out my post “You do NOT Need to Cut Off Limbs to See a 286% Sales Boost“. Yes, many of these are “common sense”. However, “common sense” is pretty rare, especially when the IT industry is in the grip of massive groupthink. It seems to me that the vast majority of designers are stuck on a few concepts which seem to “look cool” but in reality lower usability, and therefore kill profitability. If you are using a quality CMS (they are also quite rare) or even just making an HTML template, it is not hard to follow these suggestions. In fact, my experience has been that it is easier to let a Web site flow and adjust to the user’s environment than to attempt to force a particular layout in all cases. If you cannot produce a Web design that avoids horizontal scrolling at 800×600, I suggest you re-evaluate your design. Web designers have been creating great looking designs which work quite well at 800×600 for well over a decade now.

        To other comments, thanks much for your feedback. Wesley, I know that the TR site has some issues. These are outside of my control, but I know that the TR staff is extremely responsive to user feedback. If you make your thoughts heard to “those that be”, and enough people do, I am sure they will at least take it into consideration. I too experience the “buzillion image downloads with each keystroke” problem, so I am forced to write blogs in Word and copy/paste to the blog, then strip Word’s bad HTML. That is not a good solution at all.

        J.Ja

    • #3277823

      You do NOT Need to Cut Off Limbs to See a 286% Sales Boost

      by justin james ·

      In reply to Critical Thinking

      Imagine that you could see the following improvements on your Web site, with a total of only four weeks of work:
      * 244% increase in conversion rate
      * 286% increase in overall sales
      Those numbers are from http://www.creativegood.com/casestudies/shure.html

      Here is another set of numbers to consider:
      * Sales/Conversion rate improvement: 100%
      * Traffic/visitor count improvement: 150%
      * User performance/productivity improvement: 161%
      * Use of specific (target) features improvement: 202%
      That set of numbers came from http://www.useit.com/alertbox/20030107.html

      Wow! If you are in charge of (or derive income from) a Web site or software development, how many fingers would you cut off for these levels of improvement? Well, there is good news for you: you do not have to lose any fingers. You just need to follow usability theory.

      The first set of numbers comes from Creative Good, a design consulting firm run by Mark Hurst, after four weeks of site redesign work. The second set of numbers comes from Jakob Nielsen’s Alertbox, as a result of a 135% increase in site usability. Both of these people have proven, decades-long track records of significantly improving the usability of applications and Web sites. They are also the two main influences on my running water Web design philosophy.

      Mr. Hurst pays special attention to the user’s experience. Mr. Nielsen uses scientific methodologies (control groups, eye tracking, in-depth statistical analysis, and so on) to choose amongst proposed designs. I would have to say that the numbers above (just a brief sampling of the numbers available on each of their Web sites) speak for themselves.

      As I have already written, usability is the killer feature. There is not a single site out there on the Web which is unique. Someone else is always offering the same products, content, or service for the same price you are, sometimes cheaper. Your Web site, product, content, and service are commodities, and users have choices. If you are first to market today, tomorrow you will be one of a dozen, and by next week you will be in a sea of hundreds. Google makes its living stealing everyone else’s lunch by improving usability.

      Remember, you are not the end user. Your high-priced design consultant is not the end user. The Internet elite pushing AJAX and Web 2.0 are not the majority of end users. My grandmother, your six-year-old daughter, and the person who thinks that the CD-ROM drive is a coffee holder are the typical end users. Unless you are making a tool for use by programmers, asking your programming friends if something is usable is worse than worthless; it is harmful. Think about that. Re-read those numbers.

      I have never, ever spoken to an end user who said, “gee, I had a really hard time figuring out how to make a purchase from that site and I just gave up, but wow, it sure was pretty, I will be sure to attempt to shop there again.” I have never seen an AJAX or Web 2.0 proponent show hard evidence, gathered with sound testing methodology, that their techniques improve sales, conversion rates, traffic levels, hit counts, new user registrations, or any other bottom-line number. I have never seen a consultant or graphic designer who specializes in confusing yet beautiful site design offer a single number showing that by reducing usability they improved the metrics that truly matter.

      Another interesting fact is that the rules of usability are fairly static. For example, initial efforts at 3D interfaces showed that they were just as useless in 2006 as they were in 1986. Improvements and changes in technology do not equate to a change in the users’ needs; they are merely different ways of addressing the users’ needs. TiVo did not suddenly create the desire for people to view movies whenever they wanted. It simply addressed the desire, and did it better than a traditional VCR did, which did it better than HBO did, which did it better than the movie theater did. The Internet did not create a demand for adult content. It simply did it better than… you can imagine the list. These are examples of how improvements in usability generate big bucks. If you do not believe me, compare the sales figures for iPods versus portable CD players.

      So yes, if you want to improve your Web site’s performance, by all means consider a redesign. But do not settle for the most attractive design or the design with the most features. Choose a design that will demonstrably improve your bottom line. That means listening labs, observations of users at work, log analysis comparing the new design being given to random users versus the current design, and so on. That means using scientific rigor. Even if you do not have a big budget, just grabbing ten people who have never used your site or application and represent typical users (family, friends, spouses, etc. are a great source of free testers) and watching them try to perform typical tasks while timing them can help you. But sitting in an ivory tower or a conference room where groupthink kicks in and nodding your heads saying, “Yes, I like this much better” will not help you.

      J.Ja

    • #3206835

      The IT Community can Help the Community

      by justin james ·

      In reply to Critical Thinking

      One of my earliest experiences with IT work was volunteering for a local charity. It was a home for developmentally disabled adults. I assisted with their computer systems, creating accounting and budgeting spreadsheets, building a Web site, and more. It was also one of the most personally satisfying IT experiences I have ever had. In high school, I was extremely active with charity work through the Air Force Junior ROTC program that I was in. Ringing bells for the Salvation Army, visiting local nursing homes, and more were part of that experience. None of these had any kind of payoff for me other than a sense of satisfaction with myself.

      Over the years, I fell out of this practice. Recently however, I have begun doing work like this again (not in an IT capacity, however). I still find it immensely rewarding. I have decided to offer my services as an IT professional to some non-profit organizations as well. IT services are typically quite expensive. If I can help where I can, I want to.

      The Open Source Software folks talk about how they are changing the world by giving away free software. Yes, it is great that businesses can save money and new technologies are being developed. Right now, there are children who have been severely abused in shelters, addicts trying to recover from their disease, prisoners trying to learn a new way of life… the list is endless. And there are non-profit organizations trying to do something about these problems. It is easy to look at some of the recent scandals and problems with the large charities out there and say, “well that does not look like something I want to be involved in, and they have enough money to hire IT professionals anyway.” That may be true. However, for each huge charity that may put only 10% of its budget toward actually helping people, there are a hundred local charities operating on a shoestring budget. These charities are where you can deliver the most help, and they are the ones with the least ability to get IT help.

      One of the great things about volunteer work is that your resume is not nearly as important as your willingness to help. If you are looking to learn a new skill, volunteering is a great way to put it into practice. It can also help build your resume, because you will have the opportunity to try things which you are not qualified to do “on paper.” For example, a local group that I am involved with is beginning a VoIP project. I have no experience with VoIP, but I have a lot of experience in general networking, systems administration, telecommunications, power protection, and other things peripheral to a VoIP project. The person who was asked to do the VoIP project has zero knowledge of any of these things. This organization was delighted when I offered to help out, and now I will have the chance to learn VoIP when I might not otherwise have the chance to do so.

      We all know what kinds of productivity gains IT can deliver to businesses. But non-profit organizations, particularly those that operate on the local level, often cannot afford the capital investment to leverage IT properly. I am not familiar with any services or Web sites that help worthy organizations find volunteer IT help, or help IT pros looking to do some good deeds assist a good cause. If you know of such an organization, please let me know. I am also considering starting a Web site to assist with this. If you are interested in helping, please let me know via TechRepublic private message. I think it is high time we put the spirit of the OSS community to work in helping others with problems more important than the TPS reports.

      J.Ja

      • #3206796

        The IT Community can Help the Community

        by georgeou ·

        In reply to The IT Community can Help the Community

        Now I’m not saying that volunteering is a bad thing or anything, but guys like Steve Wozniak and Bill Gates made a ton of money first and they turned around to give Millions and Billions of dollars away.  Others gave away their software first and ended up with no money to give because they had to work full time for a living.  Both camps obviously had their heart in the right place, but one camp was obviously a lot more effective than the other.

      • #3206640

        The IT Community can Help the Community

        by justin james ·

        In reply to The IT Community can Help the Community

        George –

        I agree with you on that point. If you are looking to help others, you cannot give something you do not have. And if you are looking to do that with money, you cannot give it unless you have it. That’s the beauty of volunteering time. Most people really do have time, even if it is only a few hours per week. Besides, as I mentioned in the post, giving time to volunteer does have a couple kickback benefits. 🙂

        J.Ja

      • #3208196

        The IT Community can Help the Community

        by sean.morley ·

        In reply to The IT Community can Help the Community

        Hear, hear!

        When I look at the amount of great freeware and shareware available, I think “Wow, this is an amazing community!” From Excel add-ons, to antivirus tools, to Linux. Now think about directing some of that incredible talent towards those in need – those without the option to just go buy it. As someone who has worked with handicapped children, and now has one, I can tell you I have seen it from both sides. You CAN make a significant difference.

      • #3206174

        The IT Community can Help the Community

        by xitx ·

        In reply to The IT Community can Help the Community

        I would like to talk about volunteering from another viewpoint. I had the experience of volunteering at a non-profit organization. It turned out that, because of my dedicated work, the organization fired one of their employees who was doing the same work as mine. What a pity. That was definitely not my original intention, but the result was bad.

        What I want to say is: before volunteering, please check whether it only benefits the organization while hurting others.

        By the way, you cannot volunteer freely in Canada. I have been a work permit holder in Canada for a while and I tried to volunteer my skills. The funniest thing is that nobody wants to ‘employ’ me, because my permit does not allow me to work for any company other than my current employer, even though the work would be free.

        Dennis

      • #3213038

        The IT Community can Help the Community

        by dan cooperstock ·

        In reply to The IT Community can Help the Community

        One way to help non-profits is with software. You may be interested to let any small to mid-sized charities you are working with know about my free program for tracking donors and donations and issuing charitable receipts. It’s called DONATION, and it is very popular and well liked (over 3,000 registered users). 

        You can find it at http://www.freedonationsoftware.org.

        I am actually just about to start a complete re-write of it, most likely in C#, using CrystalReports for the reporting, and perhaps the new Microsoft SQL Server Everywhere for the database, and then open-source it. (Until now, it has been free of charge but not open source, largely because it’s written in PowerBuilder, and I don’t think there is a large enough PB programmer community to support an open source project such as this.)

        Although I plan to do almost all of the work myself for the first draft of the port, if anyone is an expert on some of these technologies and wants to offer their services as a consultant, that would be most welcome, because .NET programming is fairly new to me. You can contact me at info @ freedonationsoftware.org.

      • #3212824

        The IT Community can Help the Community

        by leigh9 ·

        In reply to The IT Community can Help the Community

        Hi

        This is a brilliant idea, and it is true that the bigger charities are often the least efficient. I provided tech support for a small charity called Asian Aid Organisation for a number of years. I currently work for them, paid 3 days a week, working 5-6. If I could afford it I would work for free, because I believe in what they do. I am developing a new MySQL database and web browser interface for them. I am woefully underequipped for this task, and a very ordinary programmer. Many more capable people refused to help, saying it couldn’t be done. It certainly hasn’t been easy, and I am currently struggling with a receipting page which is frighteningly complex.

        If there are any VB, ASP, and JavaScript heroes out there who’d like to help ‘break the back’ of my current issue, I, and in fact Asian Aid, would be very grateful. Which means that thousands of children and hundreds of women would also be grateful. You can get an idea of who we are and what we do by going to http://www.asianaid.org.au. This is the old static site. The new site is not accessible from outside, and there are obvious security issues. I will set up a dummy version at my house if anyone is interested in helping. VoIP gurus are welcome too.

        Thanks in advance,

        Leigh

      • #3215246

        The IT Community can Help the Community

        by compootergeek ·

        In reply to The IT Community can Help the Community

        This is one of the nicest gestures I’ve heard in a while.  I don’t know of an organization that helps IT people find ways to volunteer their time.  While every one of us wants to help, we need to keep in mind that any charity that wants volunteers NEEDS to obtain help from someone who has been trained, or has experience, in the volunteer needs they have. Let me provide an example. If a charity needs people to volunteer their time in a bookkeeping capacity, like maybe someone who can track donations, they shouldn’t be allowing someone who doesn’t have ANY experience or training to perform that task, UNLESS the charity offers the training to the volunteer. I believe some of these organizations run on a shoestring, or run inefficiently, because they’re not taking the time to utilize all the resources they have to locate the right people for the positions that run the operation of their organization. The operation of a charity should be no different than a business. Your intentions are EXTREMELY good; however, I would love to hear that you will be seeking out volunteer work that utilizes YOUR expertise for the charity, not building your expertise at their expense.  I encourage you to locate a charity that can use what you already know; I believe they will benefit more from your efforts. Best of wishes to you in finding your way in “a life of giving”. And thanks for sharing your story here. You’ve inspired me to look into what I can give. :o) Don’t forget to let us know how this turns out for you!

    • #3205988

      Did we miss something by ditching analog?

      by justin james ·

      In reply to Critical Thinking

      I am certainly no hardware expert, but for whatever reason, I think we lost something in the world of computers when we went to a purely binary world of computing. Maybe I have just read Destination: Void by Frank Herbert one too many times. But something inside of me is just nagging that a system of many parallel circuits performing identical functions, but that allows some randomness to occur, could be incredibly useful, particularly in the field of Artificial Intelligence. Any kind of randomness would be fine, whether it be slight delays in processing by some circuits, slightly inaccurate results due to minor differences in physical state, or even just random interference from stray EMF radiation.

      Why do I think this? It is because I believe that making mistakes is a crucial part of the human learning process. Until one makes a mistake, it is impossible to distinguish between understanding and the repetition of memorized information. Currently, with binary computing, all “intelligence” is simply a form of memorization. The computer is handed a set of initial patterns, and some rules to build new patterns, and it is supposed to “understand” linkages between abstract concepts. But since the programmer is giving the computer the rules for making new patterns (even if those rules are patterns to develop new rules for pattern generation), the computer is ultimately tied to the programmer’s inherent biases and concepts.

      What I envision instead is a tabula rasa of hardware, with some sort of input mechanism giving the computer access to data feeds (camera data, audio, keyboard, or whatever), a simple output mechanism, and a mechanism to signal approval or disapproval of the results. It is up to the computer to reprogram itself in a way that, at the very least, avoids disapproval and seeks approval. In a nutshell, we create “pain” for making a mistake and “pleasure” for success.

      Why the tabula rasa? Because that is how human minds start. Humans may have an instinct for language like any other animal, but the language itself is undefined. There are language portions of the brain, but no particular language is needed, thus the proliferation of languages all over the world. It is quite interesting to note that certain cultures reflect themselves in language, but even more interesting that language itself affects the way in which people think. I believe that this is because language imprints patterns upon our thought, and our thought tends to follow those patterns.

      Underneath it all should essentially be a massive pattern recognition system that the computer can rewrite as it sees fit, possibly in a method of evolution with some random mutation thrown in for good measure. Hopefully, many of these systems can be networked together in a less than perfect manner, so that they may swap patterns, but the communication system would occasionally alter or even completely corrupt the pattern. It would seem to me that the transmission of mutated ideas could help spur additional evolution of the pattern system.

      I think that a good beginning of this system would lie in evolutionary game theory. Current techniques involve establishing set strategies in advance; this system would develop its own strategies. The evolutionary game itself would provide the reward and punishment system. I think many techniques from functional programming may be used for such a system. Overall, I think it would be an absolutely fascinating project. In my mind, I feel that this type of system could be used for a wide variety of tasks. Often in data, it is the exceptions to the norm (outliers) that are the truly interesting piece of information. If you are a police detective, the lives of the average citizen are not of much interest to you, but the behavior of a small segment is much more important. This kind of system would be excellent at finding the exceptionally interesting data.
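
      To make the idea a little more concrete, here is a toy sketch, in TypeScript, of the kind of evolutionary loop I am describing: the program is only given an approval score after the fact, and it rewrites its own candidate patterns through selection plus random mutation rather than through rules handed to it. Every name and number below is a hypothetical stand-in, not a real learning system.

      // A toy sketch of an evolutionary loop driven by an external reward signal.
      type Pattern = number[];                 // a "pattern" is just a bit string here

      const PATTERN_LENGTH = 16;
      const POPULATION_SIZE = 30;
      const MUTATION_RATE = 0.05;

      // Stand-in for approval/disapproval: the environment scores a pattern
      // without ever telling the program how to improve it.
      function approval(p: Pattern): number {
        return p.reduce((sum, bit) => sum + bit, 0);   // toy goal: all ones
      }

      function randomPattern(): Pattern {
        return Array.from({ length: PATTERN_LENGTH }, () => (Math.random() < 0.5 ? 0 : 1));
      }

      function mutate(p: Pattern): Pattern {
        return p.map(bit => (Math.random() < MUTATION_RATE ? 1 - bit : bit));
      }

      let population: Pattern[] = Array.from({ length: POPULATION_SIZE }, randomPattern);

      for (let generation = 0; generation < 200; generation++) {
        // "Pleasure and pain": rank by approval and keep the better half...
        population.sort((a, b) => approval(b) - approval(a));
        const survivors = population.slice(0, POPULATION_SIZE / 2);
        // ...then refill the population with imperfect (mutated) copies, a rough
        // analogue of swapping patterns over a lossy communication channel.
        population = survivors.concat(survivors.map(mutate));
      }

      console.log("best approval:", approval(population[0]));

      The interesting question is what happens when the approval function is itself the outcome of an evolutionary game rather than a fixed formula; the loop stays the same, only the reward signal changes.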

      Another great use of this kind of system would be to replace the folksonomy systems that are sprouting up everywhere. Folksonomy systems are a direct result of the fact that quality contextually relevant searching mechanisms are incredibly difficult to program. I have a major problem with the “tagging” trend, which is the same problem I have with Wikipedia. Only the truly enthusiastic bother to do it consistently, and the truly enthusiastic (or worse, fanboys) are not very reliable because of their enthusiasm. Just like it is very difficult to get good information about any OS thanks to the fanboys and zealots, I do not like the idea of using tagging or folksonomy to generate metadata. A system that is capable of generating its own contextually relevant patterns would be excellent at this.

      If you have any ideas on this subject, I would love to hear them. Pattern matching is one of my personal pleasures (I drive my friends nuts with high-speed association games), and AI completely fascinates me.

      J.Ja

      • #3206930

        Did we miss something by ditching analog?

        by mark miller ·

        In reply to Did we miss something by ditching analog?

        I’m inclined to agree with you that AI is easier to achieve using analog technology instead of digital. Some of what you describe about creating learning systems that learn from mistakes has been achieved with neural networks on digital systems. I took an AI/philosophy course in college and we covered the different experiments that have been done. One of them used the rote memorization method, which at that time was regarded as a failure, because it lacked flexibility. The theory was that if you give an AI system enough basic information, it will build up “common sense”, and be able to build more and more of its own inferences from that. One of the examples I remember from it was a lesson in history, “President Lincoln died in 1865,” the machine’s owner told it, and, “Lincoln is still dead.” This concept is like “Well DUH!” to us, but apparently it hadn’t gotten the idea that if someone dies, anyone, that they’re always dead after that. In fact, there were more examples of this. “President Kennedy died in 1963,” and, “He is still dead.” It was able to draw its own inferences, and ask questions based on what it had been given. Once I saw that it didn’t understand a basic concept like death though, I knew that approach was not going to do the job.

        I remember hearing about an early experiment in neural nets, and it drove home the lesson that you have to be careful what the machine is actually learning. It was being done as a military experiment. A neural net was shown a series of photographs of tanks. A tank in the open. A tank hidden in some brush, etc. I believe they also used some “control” pictures with no tanks in them. It was also an experiment in machine vision. They thought they had trained it, and decided to test it. It failed miserably. They even showed it a picture of a tank out in the open and it didn’t recognize it. The “training pictures” were all of tanks on a cloudy day. The pictures had a darker hue. The test photographs were done on a sunny day. It turned out all the machine had learned was how to distinguish a picture taken on a cloudy day, and ones taken on a sunny day. It may have recognized tanks (I can’t remember), but only the ones in the “cloudy” pictures.

        I can’t remember where I saw this, but it was pretty neat, and it was a few years ago. An inventor had created these bug-like machines that were very simple, and only a few inches long. He used analog technology. All they did was walk on 6 legs, but they could do it over any terrain. He also said that he thought the AI community had made a mistake by pursuing digital technology, because the brain is an analog computer, as are the brains of all animals. He showed how adaptable they were by taking one of the “bugs” and bending its legs in odd shapes. He put the machine back on the ground, which was grass and leaves, and within a minute it had learned to walk again on its misshapen legs. He believed that we needed to take the same approach as evolution: go from simple “organisms” and work our way up to more complex ones, as we learned how to make each class of AI machine work.

        What may be implicit in this approach is doing away with the idea that’s long existed in the AI community that AI is an emergent property. Network enough computers together, throw enough computing power at it, and intelligence will just naturally emerge. He was saying that we humans have to learn what makes nature tick before we build a brain that’s as intelligent as we are.

        And yes, I agree that one of the properties of intelligence that I think we as humans would best recognize is learning as a result of making mistakes. Learning systems that are not allowed to make mistakes are inflexible.

      • #3208596

        Did we miss something by ditching analog?

        by justin james ·

        In reply to Did we miss something by ditching analog?

        Mark –

        Thanks for the great feedback! I saw those bug machines too! I remember it very well, because only a few days earlier, I had seen something from Daniel C. Dennett about how he was looking for something just like this, so I called him on the phone and talked to him a bit about it. After a bit of manual research (why are even within-site search engines so bad? I had to browse past issues to find it), here is the link to the article I had read: http://www.smithsonianmagazine.com/issues/2000/february/robots.php

        That should get you started. 🙂

        J.Ja

      • #3208346

        Did we miss something by ditching analog?

        by fbuchan ·

        In reply to Did we miss something by ditching analog?

        Before pursuing AI, we might hope our species tries to pursue RI (real intelligence), because even if we wanted to go the AI route, some RI behind it might guarantee we produce something that can reason without our multitude of flaws. Then again, as a stick in the mud, I’m still convinced these mechanical objects are just tools, and not some substitute for human companionship.

        To be fair I see the value in “reasoning” machines, but “artificial intelligence” is a fantasy, because to achieve it requires we understand “intelligence” itself. We evidently can’t achieve that understanding, as we have no objective measures of it that have reliable metrics in the real world, so how ever would we achieve an artificial representation of something we can’t even collectively define?

      • #3206269

        Did we miss something by ditching analog?

        by mjd420nova ·

        In reply to Did we miss something by ditching analog?

        I can’t imagine a day without analog capabilities.  I need to have a variable life, not just good or bad.  My home exists for analog.  Each room is rigged with infra-red devices that can read analog levels; otherwise the intrusion alarm would go off every time the cat entered a room.  My wall outlets are set up with analog readouts of current loads, with limits set to prevent overloading. The thermocouples read temperatures, not hot or cold. Without analog I’d be lost. My motion sensors would be going off all the time; I can’t confine the children to one room, you know that’s impossible.  With my smart home, I can come home from work, review the event log and see when the kids came home from school, where they went in the house, even if they went to the refrigerator and opened the door and for how long.  I can see if they turned on the TV before finishing their homework.  I can see every room they went into, what devices they turned on and even any deviations from the normal, such as trying to plug in the hair dryer in their bedrooms.  Anyway, it won’t work in there, the limits are set much too low and will cut the current if the limit is exceeded.  The toaster only works in one outlet in the kitchen and the hair dryer in one outlet in the bathroom.  The central air system reads temperatures and humidity in each room and can adjust the heating/cooling to remain within set parameters, same for the humidifier/de-humidifier.  On and off won’t cut it and would result in wild fluctuations and a utility bill through the roof.  A complete home control system must be analog, and the event/result formulas are rather logical but subject to many variables.

      • #3215530

        Did we miss something by ditching analog?

        by antitechnotechnoweenie ·

        In reply to Did we miss something by ditching analog?

        Here is a low-speed association.  You missed something when you admit to tabula rasa (blank slate) as the fundamental a priori state of a human being.  There are two other possible states:  intrinsically good, and intrinsically evil. Pain/pleasure control loops applied to a sufficiently fine mesh are how we train our dogs.  They obey, but they do not distinguish between good and evil.  We do.  We set the standard for the application of pain/pleasure. We are the ones who bit the apple.  If you want your machines to decide for themselves what is good and evil, take your lesson from Eve.  Finding themselves naked, do not be surprised if they disagree with your relative concepts, pull you down from empirical heaven (decide you are evil), and author their own gods (standards).  History or myth?  Can myth repeat? You will be forced to impose some sort of Babylonian schism of languages to keep from being overrun.  A tagging system or something.  Strange how we seem to be going in circles at warp speeds.  “We’re goin’ nowhere mighty fast, Cap’n” (Scottie).  It’s a big loop, but it’s still a loop.  Here we are again at a decision point.  Check your metadata.  You may have a core fault.  The fault may be etched on the chip (tabula) and not an app or OS issue.

      • #3214306

        Did we miss something by ditching analog?

        by justin james ·

        In reply to Did we miss something by ditching analog?

        AntiTechnoTechnoWeenie –

        Bringing up the idea of “intrinsically good” and “intrinsically evil” as a concept of state, as opposed to properties or attributes, is an extremely risky proposition. First, you presuppose the possibility of “intrinsically good” and “intrinsically evil” states. This rests upon the concept of “good” and “evil”. Assuming that there is such a thing as “good” or “evil”, here is why your logic is not valid:

        1. If a new person is “intrinsically good” or “intrinsically evil” as an actual state of being (or even an attribute of their existence!) then they have no choice in whether or not they are actually “good” or “evil.”
        2. Therefore, in this case, “good” or “evil” behavior is not a matter of free will, but of design (presuming the participation of a divine creator) or accident/evolution (without divine intervention, not to say “there is no God”, but to say, at minimum, “God had nothing to do with this”).
        3. It is quite well accepted that “good” or “evil” is a function of free will. A rock can neither be “good” nor “evil.”
        4. By considering “good” or “evil” to be a state of existence and removing free will from the equation, you have removed any possibility of describing it as “good” or “evil.”

        You are trying to show that sometimes humans make decisions that take into account “good” and “evil” at the expense of purely rational decisions. For example, “I want to rob that house and I know I will not get caught, but to do so would be wrong.” However, this is faulty logic, and I will show you why:

        1. “Good” and “evil” have a value in and of themselves.
        2. Purely rational agents take values into account when making decisions.
        3. Therefore, to take the morality of a particular decision into account is to be acting rationally, albeit in a way in which some values are difficult or impossible to quantify.

        J.Ja

    • #3206511

      Geeks and Communications Skills Part II: Getting Phone Support

      by justin james ·

      In reply to Critical Thinking

      In April, I touched on the topic of “Geeks and Communications Skills.” Reading through some of David Berlind‘s (over at ZDNet) recent blogs about poor customer service via call centers, I am following up my original post with this series. This series refers specifically to communicating with call centers, a potentially unpleasant task that most IT professionals need to deal with on a regular basis. This post discusses how to get maximum results from a call into the support hotline, and the next post in the series will show how to provide quality phone support.

      Getting quality support over a phone for a technical matter is not the easiest thing in the world to do. As someone who has been on both sides of the phone call, I know that it can be frustrating both to get help and to be helped. As IT professionals, making contact with phone-based support is part and parcel of our jobs, and it is inevitable. Chances are, when you are calling the support hotline, you need help now, all too frequently with a mission critical problem. While we should not need special skills just to deal with phone support, it is the unfortunate case that without being prepared and knowing how to effectively communicate with call centers, it is hard to get the best support possible.

      Be prepared before calling

      If at all possible, have any relevant logs or error messages handy. Telling the technical support person “it told me something about an interrupted communications with something or other” is not going to help them resolve your problem. Having the exact error message (particularly any error code or status code numbers) will cut the time needed to get help significantly. Also, make sure that you have a paper and pen on hand, to take any pertinent notes. If the problem is with a piece of equipment in another room, have a way of “walking and talking,” and make the call from a cell or cordless phone if possible, in case “hands on” troubleshooting is needed. For pieces of equipment like networking gear or headless servers, make sure that you have a laptop and the correct console cables nearby. If you have to put the support technician on hold for ten minutes to dig for your tools, you are not helping the call go well at all. And of course, always have the serial number and support contract number, or previous ticket number, in your hands before dialing the phone; many support technicians must reject your call, even for a “quick question,” without that information.

      The first few moments

      Make sure to get and record the technician’s name (as well as the spelling) or ID number as soon as they answer the phone. Also try your best to pronounce the technician’s name correctly. Do not be dismayed if the technician requires full details on you and your unit even if it is a brief question; this is the process that the support person is required to follow. If the technician sounds harried or insincere with their greeting, let it slide. Chances are, the technician is harried, and it is difficult to make “good afternoon” sound genuine when it is the 78th time you have said it that day. Keep in mind that you are calling to get technical support, not exchange pleasantries with a complete stranger. When the technician asks “how may I help you?” do not tell a long story. Describe in as few words as possible what the immediate problem is. The technician will determine if they need to hear about the preceding events. Here is an example:

      WRONG

      Technician: How may I help you?

      Customer: We got this unit about seven months ago, I think we bought it from a place we found on the Internet, but it may have been through our local reseller. It has worked great up until a few days ago, but lately it has been a bit flaky. It was making some odd noises like the noise my brakes make when the pad is worn out, and I think I may have smelt something strange in the server room near the unit, but the other technician who works in the server room did eat chili that day, so it might not have been the unit making the smell. Well, today it was down, and when I powered it back on, it sent me an email about having a bad hard drive in it.

      RIGHT

      Technician: How may I help you?

      Customer: The unit was not powered on this morning, and when we powered it up, it sent me an email about having a bad hard drive.

      Technician: Did you hear it making any strange noises?

      Customer: Yes, it made some grinding and clicking noises yesterday.

      Technician: Yes sir, that is definitely a sign of a bad hard drive. I see that you are still under warranty for parts and I will be delighted to get a replacement drive sent out immediately. Are you able to access the unit’s Web-based administration system so we can find out which drive is bad?

      See the difference? The first conversation fed the technician a lot of information that seemed relevant to the customer, but was not relevant to resolving the problem. The second conversation was short and to the point. By the time the first customer got done telling his story, the second customer was nearly finished getting a replacement part.

      Respect the technician

      When calling into technical support, always be respectful of the technician, no matter how frustrated or upset you may be. Remember, the support technician did not cause the problem, create the design flaw, write the manual, write the code, assemble the hardware, perform your configuration, or have any other part in the problem when you call in. They are there to resolve your problem. If you vent your frustrations on the technician, they are going to quickly put you in the “jerk customer” bucket and the quality of your support will likely reflect that. The technician who may have been willing and ready to go the extra mile or provide support above and beyond contractual obligations will do only the bare minimum if you are abusive to them. Under no circumstances should you use foul language or raise your voice.

      WRONG

      Technician: Thank you for calling Servers Express, my name is John. May I please have the serial number of the unit?

      Customer: Sure, the serial number of this piece of junk is I-A-M-B-R-O-K-E-N, I mean, 55-66-1212. Listen, this thing is the worst purchasing decision I ever made. Just get me my money back and I won’t have to drive over there and knock some skulls.

      Technician (rolls his eyes, puts the phone on “mute” and curses the day he was born, then takes the phone off of “mute”): Sir, I understand that you are frustrated and upset, but I am here to help resolve your problem. However, I will request that you not be abusive towards me. Let’s try to resolve the problem first; it may be something simple that can be immediately resolved. I am here to help you sir, and if a refund is needed, I will be happy to help you with that.

      RIGHT

      Technician: Thank you for calling Servers Express, my name is John. May I please have the serial number of the unit?

      Customer: Sure, the serial number of this unit is 55-66-1212. We have been having a lot of problems with this unit, and I would like to work towards getting a refund.

      Technician: Sir, let’s try to resolve the problem first; it may be something simple that can be immediately resolved. I am here to help you sir, and if a refund is needed, I will be happy to help you with that. Could you please describe exactly what problem is occurring, so that I may determine if I am authorized to get you an immediate refund?

      If you feel that the technician is not treating you with due respect, do not yell or curse; request to speak to their supervisor. If the call heads south, do not hang up and call back in hoping to get a different technician; get the supervisor on the phone. Most call center employees need to work very closely together, and are in close contact with each other to find out what happened on a call or to ask each other for advice. This also means that when you call back in, the next technician you speak to is quite likely to be already aware of the argument that you had with the previous technician, and is now expecting confrontation. Also remember that the notes in the ticket often contain more than just details of the technical problem. A good support technician immediately documents any kind of problem with the customer and alerts their supervisor (whether or not they are truthful is a separate matter) when a call goes wrong. This means that the ticket notes may have a warning such as “customer was extremely abusive and dropped the call” in them. Finally, keep in mind that you do not have to enter every fight that you are invited to. Again, if the technician is indeed being rude or abusive, you simply need to request to speak to a supervisor. After all, you are a customer.

      It is also important to remember that the technician is doing a repetitive, thankless, and probably underpaid job. That technician spends eight hours a day taking phone calls from people who vent their anger on them and who often do not have the technical knowledge to be trying to resolve the problem, all while a boss breathes down their neck about “hold time” and “average talk time” and “average number of escalations.” In addition to handling your phone call, they may be simultaneously trying to help their fellow technicians via email or instant message, as well as supporting some customers through email or online chat. Although that is no excuse for the technician to deliver less than excellent service, it is something to keep in mind when talking to the technician.

      Also keep in mind that the technician frequently has a script or process that they must not deviate from. If the technician gets away from that script or process, and there is a problem, they are the ones who get in trouble. Support technicians quickly learn that they keep their jobs by following the script. If the technicians will not budge from the script or process, do not blame them. That does not help you at all. If it is obvious that the technician’s script or process prevents them from helping you the way you need to be helped, request an escalation to a higher level of support or to speak to a manager, depending upon the problem.

      Stay focused

      For a variety of reasons, try to stay focused on the problem at hand. Many outsourced call centers get paid by the ticket. This means that when you roll four different problems into one ticket, they are losing money. In fact, some call centers will request that if you have any separate issues, you must call back in. If this happens, do not get angry. Remember the technician is just following the process and procedures laid out for him by his management. Instead, write a letter or email to customer service complaining about this policy. Also remember, when you pack many issues into one call or ticket, the notes get very confusing very quickly, and follow up calls will take longer than needed since the technician who answers the later call will need to wade through pages of notes about related issues. The support technician’s performance metrics suffer when you put them on hold to answer a call or chat with your boss or otherwise stray from resolving the problem. When the technician is worried about their boss coming over to find out why they have been on the phone for so long, they are not thinking about helping you anymore, they are thinking about ways to get you off the phone.

      Personally, I have been in situations where I was supporting a NAS device over the phone, troubleshooting a frame relay connection on the computer, clearing and investigating SNMP alerts, helping other technicians via IM, emailing customer service to have an RMA filled, and writing documentation regarding tape drives simultaneously, while being the only technician on staff and not having had a bathroom break in nearly three hours. The technician is doing more than talking to you on the phone, so try to be respectful of their time.

      Know the lingo

      Call centers have a unique language all to themselves. Knowing this jargon helps significantly in resolving problems and smoothing the call. It is also extremely helpful to know the “military alphabet” for spelling out words.

      Escalation: Referring the call higher up the chain. A call can be escalated on a technical level (involving a more experienced technician for a hard to solve problem) or the business level (involving a manager or supervisor in order to request something outside of the process or contract).

      T&M/Time and Material: When service outside of the contractual obligations is requested, it is a “T&M Case,” meaning that the customer is responsible for paying any labor or parts charges involved in the resolution. If you are told that a service request is under a T&M basis, find out what the T&M charges will be before authorizing service. T&M charges can be surprisingly high, and there is often a minimum charge as to the number of labor hours, regardless of how long the repair actually takes.

      RMA: Return of Material Authorization, the process (as well as the ID number) of getting a replacement part sent.

      Advance RMA: Most RMA processes will not send a replacement part until they receive the defective part. An advance RMA allows the company to ship the replacement part before receiving the defective part. This often involves putting a hold on a credit card for the cost of the part, which will be charged to the card if the defective part is not returned within a certain amount of time. Make sure that you clearly understand the exact terms of an advance RMA before agreeing to it. If you require a replacement part immediately, ask if an advance RMA is possible, even if the technician does not tell you about the option for it. Many companies have an advance RMA option that they do not widely announce.

      Supervisor: Be very careful when requesting to speak to a “manager.” A technician may be truthful in stating that a “manager” is not available, meanwhile a “supervisor” is sitting right next to them. When escalating a call for business reasons or to lodge a complaint, request a “supervisor,” not a manager.

      Technical Lead: A support technician within that level of support with more knowledge and experience than the average technician, but not at the next level of support. If you have requested an escalation for technical reasons and are told that they will call you back or they are unavailable, try requesting to speak to a “technical lead.” They may not be quite as knowledgeable as the next full level of support, but they may know the answer to your problem.

      Level X: Most call centers have three tiers of technicians: Level 1, Level 2, and Level 3. Level 1 technicians typically must stick 100% to a script, and often (but not always) do not know anything outside of their support database of common problems. The Level 2 technicians are much more knowledgeable, and they know things that are not in the Level 1 database. Sometimes Level 2 support is needed for particularly difficult or potentially damaging procedures. A call is escalated to Level 2 either because Level 1 is unable to resolve the problem at all, or because the procedure takes some time to perform and Level 1 needs to be available to answer immediate calls. Level 2 escalations often have a longer hold time than the initial call to Level 1, and frequently Level 2 will return your call instead of having you wait on hold for them. The Level 3 technicians are usually the highest level of support. They nearly always have direct contact with the engineering team or actually are members of the engineering team. If Level 3 cannot resolve your problem, chances are you will be getting a refund or full unit replacement. Because Level 3 problems are often “one off” issues or reveal an entirely new problem that has never been encountered before, it can sometimes take days or weeks to resolve a ticket that has been escalated to Level 3. My experience has been that Level 1 tries to cap a call at 10 to 15 minutes, Level 2 tries not to hold onto a ticket for more than a few days, and Level 3 can take days or weeks to resolve a problem. Remember, the higher you go, the more difficult the problem is to solve. Some upscale contracts allow you to request immediate escalation to Level 2, or immediately direct your call to Level 2. Depending upon your needs and ability to solve minor problems yourself, this contract option may be perfect for you.

      Technician Dispatch/Technician Roll: Physically sending a technician to resolve a problem.

      Vendor Meet: Some problems (particularly networking issues) require a technician dispatch from more than one company. When this is needed, a “technician dispatch” becomes a “vendor meet” and the different companies arrange the arrival time of their technicians so that they all arrive at the location of the problem simultaneously. A vendor meet typically eliminates any response requirements, because each company has a different SLA with the customer. For example, if Vendor A’s SLA is a 4 hour response time, and Vendor B’s SLA is an 8 hour response time, Vendor A cannot be held to their 4 hour response time if Vendor B cannot arrange the vendor meet within 4 hours. A vendor meet though often carries its own SLA, where a vendor that is late for the vendor meet is penalized.

      Response time: The amount of time that a technician has to arrive on site. Some contracts measure response time from the moment the initial call is placed, other contracts measure response time from the time at which it is determined that onsite service is required.

      X by Y by Z service: This is a shorthand way of describing SLA obligations. “X” is the number of hours per day that service is provided (24 meaning all day, 8 or 9 meaning “9 to 5” or business hours only), “Y” is the number of days per week (5 for weekdays only, 7 for all week long), and “Z” is the response time in hours. A “9 by 5 by 4” contract means that the response time is four hours from the time of the call, during business hours on weekdays. A 7 day contract typically requires service on holidays, while a 5 day contract typically does not count holidays as days of possible service.
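
      To make the shorthand concrete, here is a minimal sketch in Python of how the notation could be decoded. The notation itself comes from the definition above; the field names and the sample contract strings are my own illustrative assumptions, not any vendor's format.

      # Minimal sketch: decoding "X by Y by Z" SLA shorthand as described above.
      # The dataclass fields and example strings are illustrative assumptions.
      from dataclasses import dataclass

      @dataclass
      class ServiceLevel:
          hours_per_day: int   # X: 24 = around the clock, 8 or 9 = business hours
          days_per_week: int   # Y: 5 = weekdays only, 7 = every day
          response_hours: int  # Z: maximum time for a technician to respond

      def parse_sla(shorthand: str) -> ServiceLevel:
          """Parse a string like '9 by 5 by 4' or '24x7x4' into its three parts."""
          normalized = shorthand.lower().replace("x", " by ")
          x, y, z = (int(part.strip()) for part in normalized.split("by"))
          return ServiceLevel(hours_per_day=x, days_per_week=y, response_hours=z)

      print(parse_sla("9 by 5 by 4"))   # business hours, weekdays, 4-hour response
      print(parse_sla("24x7x2"))        # around the clock, every day, 2-hour response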

      Military alphabet/phonetic alphabet: Alpha, Bravo, Charlie, Delta, Echo, Foxtrot, Golf, Hotel, India, Juliet, Kilo, Lima, Mike, November, Oscar, Papa, Quebec, Romeo, Sierra, Tango, Uniform, Victor, Whiskey, X-Ray, Yankee, Zulu
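
      As a quick illustration of how the phonetic alphabet gets used on a call (for example, reading a serial number back to a technician), here is a small Python sketch; the sample serial number is made up.

      # Spell out a (made-up) serial number using the phonetic alphabet above,
      # so both sides hear "S as in Sierra" instead of a mumbled letter.
      PHONETIC = {
          "A": "Alpha", "B": "Bravo", "C": "Charlie", "D": "Delta", "E": "Echo",
          "F": "Foxtrot", "G": "Golf", "H": "Hotel", "I": "India", "J": "Juliet",
          "K": "Kilo", "L": "Lima", "M": "Mike", "N": "November", "O": "Oscar",
          "P": "Papa", "Q": "Quebec", "R": "Romeo", "S": "Sierra", "T": "Tango",
          "U": "Uniform", "V": "Victor", "W": "Whiskey", "X": "X-Ray",
          "Y": "Yankee", "Z": "Zulu",
      }

      def spell_out(serial: str) -> str:
          """Return the word-by-word reading of a serial number."""
          return " ".join(PHONETIC.get(ch.upper(), ch) for ch in serial)

      print(spell_out("SN4T7Q"))  # Sierra November 4 Tango 7 Quebec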

      Being transferred

      All too often, your call needs to be transferred to someone else. A common problem is that a transferred call gets dropped. Especially during a call that has been less than perfect, it is tempting to think that the technician dropped you deliberately. This does indeed happen on occasion, but it could be any number of problems. Sometimes the person on the receiving end drops the call when they pick up. A frequent issue in a call center is that a call is waiting in the queue just as a technician goes to hang up the call that they are on. Just as they push the button to drop their current call, the customer drops off and their phone automatically picks up the next call. As a result, instead of dropping the finished call, they drop the new call. Although this can be very frustrating, particularly if there is a long hold time, it is less frustrating if you get the direct contact information for where you are being transferred before the transfer attempt is made. Never assume that you were dropped on purpose, no matter how bad the call may have been. In the years that I worked at call centers, I only met a small number of technicians who would deliberately drop calls. Since the phone system recorded how each call was ended and by whom, after more than one or two customer complaints, they were terminated. If your call is dropped more than once (especially if it is by the same technician), request to speak with a supervisor immediately; you may have hit upon a bad apple in the bunch.

      Ending the call

      At the end of the call, do not forget to thank your support technician. If you were particularly impressed or disappointed with the level of service you received, request to speak to a supervisor and make your thoughts known, or send an email to customer service. Kudos for a technician are particularly well received, as it is rare for a customer to let the company know about a job well done. Appropriate compliments on service to a supervisor have a funny way of percolating through a call center, helping to ensure top-flight service in the future. If you are given a ticket number, do not forget to write it down, even if the problem is resolved. If a technician later needs a full history of a recurrent problem, it is always easier to refer back to a previous ticket than to try to describe something that happened weeks or months ago.

      Following up

      Some tickets, particularly chronic problems, may require follow up. If the technician said you will get a call back, and you have not received one, do not get mad or panic. They may be handling an unusually high call volume that day. Simply call back and let them know that you were expecting a callback and did not receive it yet. If you repeatedly do not get a call back by the promised time, escalate the call to a supervisor. Repeated failure to call back is unacceptable, regardless of contractual obligations or reasons for the failure. If the technician provided you with an email address to send log files or other troubleshooting information, do not treat this as your direct pipeline to that technician! That technician may no longer be on duty, or may be unable to accept support requests via email. Try to make follow up calls through the same path (online chat, email, phone, etc.) as the initial contact; some companies route different contact methods to separate groups of technicians who may not be familiar with your problem, or may not even have access to each other’s systems. If a ticket has been referred or escalated to another group, find out how to contact that group directly if possible. Following up with Level 1 for a call that has been escalated to Level 3, or talking to Level 2 to get the status of an RMA, just wastes time.

      I hope these tips help you to get the best possible support from a call center! My next post will provide advice for call center technicians looking to provide the best possible support for customers.

      J.Ja

    • #3212739

      Geeks and Communications Skills Part III: Delivering Great Phone Support

      by justin james ·

      In reply to Critical Thinking

      In April, I touched on the topic of “Geeks and Communications Skills.” Reading through some of David Berlind‘s (over at ZDNet) recent blogs about poor customer service via call centers, I am following up my original post with this series. This series refers specifically to communicating with call centers, a potentially unpleasant task that most IT professionals need to deal with on a regular basis. The previous post discussed how to get maximum results from a call into the support hotline, and this post in the series will show how to provide quality phone support.

      Call centers are an entry point into the IT industry for many people. Although my time at a call center occurred well after I had entered IT, it was still a great experience for me. Some of the most rewarding jobs I have had over the years involved direct customer contact for a full eight hours a day. However, delivering quality customer service, particularly trying to assist a user with technical problems over a phone, is a difficult art to master. These tips should help the call center employee (or anyone trying to assist a customer over the phone) to deliver the best help possible.

      Answer the phone properly

      A proper phone greeting should include a salutation, the name of the company, the precise department that the customer has reached, and the support person’s name. It should be as sincere as possible. Treat every call as the first call of the day, and remember that no matter how difficult the last call was, it has nothing to do with the current call. The standard greeting I have always used was, “Hello, this is XYZ Technical Support. My name is Justin, how may I help you?” Although the script for that company suggested that I ask for the serial number first, I discovered (after much experimentation) that my greeting let the user know that I was there to help them, not to tie them up with process. After the user gave me a brief explanation of their problem, I would then request the serial number. If your company allows you a bit of freedom with your greeting, try this out. Your users will tend to be more pleasant to deal with if you greet them like this.

      Empathize with the user

      When a user calls technical support, they are frequently overcome by a feeling of powerlessness. They may be embarrassed that they cannot solve the problem on their own, or worried that their job rests upon a successful resolution of the problem. I have literally heard grown men break out into sobbing tears on a support call. To compensate, some users will attempt to direct the call, or refuse to follow directions. Other users try to pump up their egos or impress you with their wisdom. I have had callers repeatedly brag about their Cisco or Microsoft (or whoever) certification; meanwhile their problem is extremely basic. While it may be tempting to ask them “if you are so smart, why are you calling me?” do not do it. You have the information needed and the process established to help the user, and it is up to you to direct the call. You need to quickly establish this with the user by asking pertinent questions up front. Once the user sees that the questions you are asking will solve the problem, they are happy to follow along. It is very rare to have a caller refuse help once you have demonstrated that you can actually help them. But always keep in mind that the user feels helpless. Do not embarrass them or make them feel small; show in a kind and gentle way that you are there to help, if they will let you.

      Respect the customer

      Keep in mind that by the time the user calls you, they have probably been trying to solve the problem themselves. They may be under the gun with a tight deadline or a mission critical problem. As a result, users often sound harried, tense, or may be terse or short with you. They may even take their frustrations with your company, the product, or your co-workers out on you, and act as if you are the problem. Let it slide. The last thing you need is to get into an argument or be abusive to a customer because they are having a bad day. Your job is to put a smile back on their face and resolve their problem, not to dictate to them a lesson in manners.

      Of course, if a customer is extremely rude or abusive, such as using foul language directed at you personally, you have the right to politely request that they calm down. I have found it helpful to remind them that I am not the unit, the company, or the other support technician who was not helpful. One example of an effective phrase to do this is, “Sir, I understand that you are frustrated and angry right now. But I am here to help you, and it is difficult for me to do that if you are abusive to me. If you would like, I can transfer you to my supervisor who may be able to help you.” By doing this, you have put the ball in the customer’s court. They can either calm down, or they can speak to your supervisor while you help the next customer. But you cannot assist the user if they insist on insulting you.

      Another aspect of respect is to learn the customer’s preferred name, to use it frequently, and to pronounce it correctly. When the customer gives you their name, always refer to them as “Mr. [Last Name]” or “Ms. [Last Name]” unless they request that you use their first name. Never refer to a woman as “Miss” or “Missus” unless you distinctly hear them say that; “Ms.” is the appropriate prefix for a woman of unknown marital status. “Sir” and “Ma’am” also go a long way towards being polite and respectful. If you have accidentally confused a “Mister” with a “Ms.” or vice versa (it happens quite frequently, especially for men with high voices or women with low voices, and a gender-ambiguous first name) and they correct you, simply apologize and make sure that you do not repeat the mistake. Using the customer’s name is important on many levels. It shows that you are treating them as an individual, not as a nameless, faceless voice emanating from a telephone. It also lets them know that you are paying close attention to them. Be wary though: if you mispronounce the customer’s name, you are letting them know that you are not paying attention. If they have a difficult name, repeat the name and ask them if you have the pronunciation correct (“Mr. Wick-cow-sky, is that correct?”). My experience has been that customers really like this level of personalized treatment.

      No matter what happens, never, ever raise your voice to a customer or use anything less than polite, respectful language. If the customer is in a loud environment such as a server room, there is a way to raise your voice to be heard and still sound pleasant, and a way to sound like a grouch when raising your voice. Learn the former. If the call is getting “too hot for comfort,” it may be time to involve your supervisor, or possibly place the user on hold for a few (very brief) moments to take a deep breath before getting back onto the call. For particularly angry customers, I found it helpful to have my supervisor jack into my phone to monitor the call on the spot, or be by my side, so that they were available if needed. This also allowed my supervisor to directly hear what was going on, so that if a complaint or a problem arose, it would not be my word against the customer’s.

      No matter what the caller sounds like, never, ever mention their accent. I cannot stress this enough. While I do not advocate lying to a customer, “Sir, could you please speak a little more slowly? It is very loud in here at the moment,” will be taken much better than, “Sir, I have a hard time understanding your accent. Could you please speak slower?” Many people may not speak your language natively, or may be from an area with a different dialect of your language. If one is available, ask the user if they would prefer for you to get a translator on the line. Many people who do not speak a language natively are very self conscious about their accent. To bring it up is impolite at best, and quite offensive at worst. Just about the worst mistake I ever made regarding accents was offering to get a translator on the phone for someone from Scotland. I simply could not understand the Scottish accent at all, and I thought that maybe Scots had a regional language (like Wales). Boy was I wrong! The only thing that saved me on that call was that it was an internal call, and it was still rather difficult to smooth it over after that.

      Learn the customer’s lingo and use it

      Each call center has its own subset of the English language, a combination of product-specific words and phrases as well as regional sayings. However, your customers may be from all over the country or even from all over the world. They are not steeped in the culture of your company either. So if the user calls the device in question a “doodad” and your literature calls it a “widget,” just call it a “doodad.” Similarly, if your user says something along the lines of “power cycle,” say “power cycle,” not “reboot.” Adapting to the user’s language allows you to communicate more effectively. If you work at a call center long enough, you pick up bits and pieces of different regional phrases. Feel free to use these, as long as you do not sound like a pretender or like you are mocking the user. I worked in one call center where we dealt with a large number of people in Australia. One of my co-workers would do his best version of “G’Day mate!” in an exaggerated Queensland accent (think Paul Hogan). Not only did he sound ridiculous to begin with, but the callers were typically in New South Wales and had an entirely different accent. Most of them did not find it particularly funny; they thought he was making fun of them.

      Pick up on the user’s mood and match it

      If your customer is laughing and joking, it is OK to be a bit lighthearted. Do not take it too far, of course, or make any type of even potentially offensive joke. On the other hand, if the user is a bit down, or is upset, or is all business, then you should be 100% professional. Someone with a lot of pressure on them to solve a problem will treat your lightheartedness as you not taking the problem seriously. Even if the call is in a moment of “down time,” like during a unit reboot, filling the silence with small talk is not appreciated under those circumstances. The happy, shiny customer, on the other hand, will most likely be happy to exchange pleasantries like “how is the weather?” or “I have never been there, what are some good places to visit if I ever get out there?” But remember to always be professional.

      Be sincere, honest, and upbeat

      Whenever you open your mouth, make sure that your heart agrees with the words you are saying. “I am having a great day!” is an obvious lie when you grumble it, and the customer does not appreciate that. If you are having a lousy day, there are ways to be honest yet professional. “Today has had a lot of challenges, but I learned some great things,” is a lot better than, “I am looking forwards to the end of the shift.” On that note, never mention how long you have been on duty or if you are near the end of your shift. The last thing you ever want is to give the caller the impression that you are looking to get them off of the phone so that you can go home or take a break. The user is calling in to get help with their problems, not to hear about yours. They do not want to be talking to someone with a bad attitude either.

      Always tell the truth to your customers. They have an uncanny ability to spot a lie, even over a phone. If you make a mistake, immediately correct it as soon as you know better, or admit to it if the customer catches it first. Along those lines, if a co-worker has given incorrect information to a customer (for whatever reason), be careful with how you correct the problem. “Well, I am sorry sir, but John is new and really is clueless,” is not a very good way to handle this. “John may have made a mistake or miscommunicated what he really meant sir, I will let him know and make sure that he gets the correct information,” is a much better response.

      “This call may be monitored or recorded for quality and training purposes.”

      Unless you are working for a very small company, chances are that there is always the possibility that any given call may be monitored in real time or recorded. Treat every call as if your supervisor is standing next to you. That means always be polite, do not lie to your customer, and deliver the best support possible, on every single call.

      Make sure that your ticket notes are accurate and appropriately worded

      If the customer calls back, you want to make sure that your co-workers are able to quickly read the notes in the ticket and understand exactly what happened. If a questionable situation arose, make sure that you document why you made the decision that you did. “As per paragraph 1.X.a of the system manual” or “as per team lead Joe” are wonderful ways of covering yourself if a problem comes up later. It also lets the next technician know where you got the information from. On the other hand, the ticket notes should not be too verbose. When someone has to read four paragraphs of “he said, she said” just to find out that the network connection is down and a technician is being dispatched, you have been too wordy. When writing your ticket notes, ask yourself, “if this user called back, what information would I need to know to pick up where we left off?” The user who calls back will appreciate that your co-workers are able to quickly and accurately understand the exact situation, instead of leaving them on hold forever.

      Always give your users a wide range of options, and explain them clearly and simply

      All too often, a major source of customer complaints is that if they had known about an option, they would have taken it. A common example is the advance RMA. Most users do not know about it, and accept the basic RMA policy. If they later find out that there was an advance RMA option, they may get upset because they would have preferred to use it. Always give your users the full spectrum of options, but be sure to explain what they are as well. For example, “we also offer an advance RMA” is not nearly as helpful as, “if you require the replacement part sooner, we have an option for that as well.” By giving your users options, they feel as if they are in control. When the user feels as if they were a crucial part of the success of the call, they are happy with the call.

      Ending the call

      Always try to end the call on a positive note, even if the problem cannot be solved. Remind the customer of your name, and thank them for calling. Like the greeting, it can be difficult to sound sincere when thanking them for calling in, but also remember that callers can detect insincerity quite easily. Even if the call was “challenging” (a euphemism for, “the caller insulted my religion and my mother, told me I was born out of wedlock, and ended up taking a blowtorch to the unit”), be polite until the very end. Especially when the user is upset or the problem could not be solved, any sounds or hints of displeasure on your end (even a loud sigh) will be interpreted as displeasure with them, and they will react as if you are angry with them. Your job is to solve problems, not create them. Any negativity on your part will ruin the call. At the end of the call, always ask the user if they have any additional questions or concerns, or if there is anything else that you may help them with. Never give the user the impression that you are trying to push them off the phone.

      I hope that these guidelines help you deliver great customer service to your customers!

      J.Ja

    • #3213622

      Microsoft’s Lumière Project and User Anticipation

      by justin james ·

      In reply to Critical Thinking

      I just finished reading a very interesting article (“The Lumière Project: Bayesian User Modeling for Inferring the Goals and Needs of Software Users”) from Microsoft Research. Lumière was the name of the project that developed mathematical models for assessing the needs and goals of software users in real time, in order to present users with accurate, relevant assistance as they worked. Does this sound like something that you think users would be interested in? I sure do!

      However, after reading the Lumière paper, I may be changing my mind a bit. To understand why, we need to take a brief trip through time.

      In 1993, the Lumière project began, and it was initially shown to another team at Microsoft in 1994 (naming that team would ruin the surprise!). The other team liked what they saw, and built a special version of one of their products that allowed the Lumière group to hook into it and begin showing how the Lumière techniques could improve the product.

      During this period of time, teams of experts (primarily psychologists, from what I can tell) were set up to observe users in a sort of “Chinese Room” type experiment. Users were told that they were using an experimental system to help them work. The experts observed the users, but were not told what the users were trying to do. The experts tried to identify the tasks that the users were accomplishing, and then provide the users with relevant assistance. The way the experiment was set up, users were not sure if it was a computer or humans presenting the help, because they were isolated from the experts. This was done in order to understand what types of tasks were easily identified and for which of those tasks assistance would be helpful to the user.

      Lumière uses Bayesian networks of probabilities (commonly used in spam filters) to identify what the user is trying to do and what they need help with based upon their past tasks, taking into account the users’ demonstrated abilities, levels of expertise, and historical “pain points.” After determining the users’ needs and goals (using a set of probabilities that age and become less relevant over time) through a variety of events (and non-events, such as idling the mouse over a menu option for a period of time), Lumière makes suggestions based upon what the user is most likely stuck on, or helps give them a shortcut to accomplish their goals faster, even if they already know what they are doing.
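
      The paper describes full Bayesian networks with evidence that decays over time; as a rough sketch of the underlying idea, here is a much simpler naive Bayes classifier over UI events. To be clear, this is my own simplification, and the tasks, events, and training counts are invented for illustration; they are not taken from the paper.

      # A rough sketch of inferring user intent from UI events with naive Bayes.
      # This is a deliberate simplification of the Bayesian networks in the
      # Lumiere paper; the tasks, events, and counts below are invented.
      from collections import Counter, defaultdict
      from math import log

      # Hypothetical training data: (observed events, task the user was doing).
      TRAINING = [
          (["open_table_menu", "hover_borders", "undo"], "formatting_table"),
          (["open_table_menu", "insert_rows"], "formatting_table"),
          (["open_mailmerge_wizard", "browse_contacts"], "mail_merge"),
          (["open_mailmerge_wizard", "undo", "help_search"], "mail_merge"),
      ]

      task_counts = Counter(task for _, task in TRAINING)
      event_counts = defaultdict(Counter)
      for events, task in TRAINING:
          event_counts[task].update(events)

      def most_likely_task(observed):
          """Score each task by log P(task) + sum of log P(event | task), smoothed."""
          vocab = {e for events, _ in TRAINING for e in events}
          best, best_score = None, float("-inf")
          for task, n in task_counts.items():
              score = log(n / len(TRAINING))
              denom = sum(event_counts[task].values()) + len(vocab)
              for event in observed:
                  score += log((event_counts[task][event] + 1) / denom)
              if score > best_score:
                  best, best_score = task, score
          return best

      print(most_likely_task(["undo", "hover_borders"]))  # -> formatting_table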

      Does this sound familiar yet? It should. Let us finish the history of Lumière first, though.

      The application development team was so impressed with the results of the Lumière/application integration that Lumière was adapted to that particular application (and related applications) in a less advanced form. I am not sure if the full Lumière project was stripped down to meet shipping deadlines, reduce the system resource needs, or meet the needs of a broad user base, but much of its advanced functionality (including tracking and profiling users to determine individual skill levels) did not make it to the final product. Lumière powered what was probably the most hated feature in software history.

      The application? Microsoft Office 97. Lumière’s contribution? Clippy.

      Maybe if Lumière had shipped in its entirety, and users had known that it could be “trained,” they would have been willing to put up with it long enough for it to actually be helpful. I do not know, and trying to re-predict the past based on “what-if scenarios” and “just-so stories” is not very helpful. What is known is that Clippy was so universally hated that it is probably the only feature (definitely the only high-profile feature) that I can recall Microsoft ever removing from a product. In fact, when Clippy was left out of Office XP (aka Office 2002), it was actually a selling point in advertisements!

      Before I read the Lumière paper, I had assumed that Clippy was some second-rate hack, or that maybe not much work had gone into it. In reality, Lumière had been worked on for four years by a wide variety of psychologists, usability experts, statisticians, and a ton of other highly qualified and experienced people. It was developed by the best of the best. From what I can tell, users in the Lumière testing group must have been very impressed by it for it to end up in the Microsoft Office Suite. But real-world users despised it. It was annoying. It impeded work flow. It was grossly inaccurate.

      Personally, I wanted to love Clippy. I tried a number of times to leave him on. I always felt sad, after the first use of an Office application after installation, to see him blink his eyes and commit feature suicide when I turned him off. Then again, I am also a touch attached to the little puppy in Windows file search. I tried letting Clippy help me do my job. Clippy always made me miserable, and I think he made 99% of users miserable.

      And this is why I am reconsidering my stance towards applications that anticipate users’ needs. If all of the effort that Microsoft put into Lumière could not produce a system that was usable and workable in the real world, it is pretty hard to imagine being able to do it well at all. Granted, Microsoft Office is an incredibly complex suite of applications that get used for a wide variety of tasks, many of which are outside their intended purposes (Excel as a database system, Access for client/server applications, etc.). But I can imagine the difficulty and frustration of using a system which attempts to adjust itself to the users’ needs on the fly. If it is less than 100% accurate, users will get mad and frustrated, and their lives will become painful, instead of the anticipation system making them more productive. That is the real lesson to be learned from Lumière and Clippy. User anticipation is a great idea, but only if it is perfect in its execution.

      J.Ja

      • #3213772

        Microsoft’s Lumière Project and User Anticipation

        by mark miller ·

        In reply to Microsoft’s Lumière Project and User Anticipation

        It was interesting reading about the history of Lumiere. It sounds like all they did was turn the project into a glorified version of their “Quick Tips” feature, which they used to include in many of their user applications. Every time you’d start the application up, you’d be greeted with “Did you know…” followed by an informative message about a feature you may or may not have used yet. I always turned that off. It almost seems like putting “Clippy” in was a marketing decision. If I were an engineer on the project I think I would’ve recognized that the most important part of the technology was its ability to learn what the user might need help with. The thing with a Bayesian system is that I imagine each user would have to train it. That might be something that people who have work to do wouldn’t want to put up with in the first place. I might have the patience for that, but I don’t know if my mom (a tech neophyte) would.

        Even when people need help it can take intuition to actually help them, something computers haven’t come close to yet. Don’t take this as sexist, but I’ve been around a woman who’s overwhelmed by circumstances, life, etc., and if I took her complaints and cries for help literally, I would try to assist as she asked, but end up doing something she didn’t want, making her more frustrated. What I have to do in these situations, though it’s a challenge, is kind of ignore what she’s saying, look at the situation, look at what she’s doing and what might actually be the cause of her frustration, put myself in her shoes and see what I’d need assistance with in the situation, and then try to step in and help, and hope that I’m being helpful (no guarantee there either). It may be something she didn’t specifically ask me to do at all. People who are comfortable with computers will understand the linear way a computer goes about things, and they might have more patience for the kind of help that a Bayesian system might try to offer, even if it’s a bit off. There are some people who are uncomfortable using computers to begin with. I imagine they would have little patience for a help system that makes mistakes in perceiving the need for its help. They’ll also tend to do things that are unconventional. People who are comfortable with computers understand that an application only understands what’s going on because it forces its user to conform to certain rules in interacting with it. I’ve seen people try to get around those rules and do things their own way. They get the job done, but the help system could easily get confused about what they were trying to accomplish, because it would assume that if you’re trying to accomplish a task, you’re going to conform to its rules for doing it. It would be a real challenge for it to look at the form of what’s been accomplished so far and figure out, “Oh, I see. You’re trying to do X.”

        Personally, one of the things that’s often annoyed me is the kind of autohelp or autoformatting features that Word turns on by default. Things like: if I start a numbered list, it automatically starts using its default formatting for numbered lists. Sometimes this is fine with me, but other times it just messes up what I was trying to accomplish. I try to turn as many of those things off as I can. You would think that auto-capitalization or auto-spell-correct would be helpful, but I’ve run into many situations where that has messed up what I was trying to do. I think one feature they have, which I’m fine with, is taking situations it would normally auto-correct and just flagging them as potential problems in the document with little squiggly lines of various colors. It’s unobtrusive and allows me to continue, with a little “string tied around my finger” to remind me that this may need to be addressed later. I’m fine with that. It gives me the option to have it correct them, or not.

      • #3276819

        Microsoft’s Lumière Project and User Anticipation

        by snoopdoug ·

        In reply to Microsoft’s Lumière Project and User Anticipation

        I’m sure every techie has anecdotal experience with folks who need help. My lovely wife is absolutely not interested in understanding how or why something should be done one way versus another; she just wants a straightforward way of doing something, such as adding a checkbox to a Microsoft Word page. I’ll bet we could come up with at least a half-dozen valid ways of doing this. I ended up creating a button in a toolbar, which was given her name so she would have a chance of remembering it.
        I think the computer industry has given computers a bad rap by implying that they are easy to use. I would liken them more to an idiot savant: extremely fast at doing EXACTLY what you tell them to do--nothing more, nothing less. I would also point out to users that as they get more capabilities, they get more complexity. The analogy is that if you want 125 horsepower, you get an easy-to-work-on slant-6 engine; if you want 500 horsepower, you get a complex V-10.

        Cheers,

        doug in Seattle

      • #3276778

        Microsoft’s Lumière Project and User Anticipation

        by grantwparks ·

        In reply to Microsoft’s Lumière Project and User Anticipation

        “If all of the effort that Microsoft put into Lumière could not produce a system that was usable and workable in the real world, it is pretty hard to imagine being able to do it well at all”

        So, to translate, “if MS couldn’t do it, it probably can’t be done”.
        Hahahahahahahahaha!

        Their people are no smarter than any other organization’s.  It’s a reasonably widely held opinion that MS doesn’t sell a single best-of-class product.  In some cases they’ve turned out products that are a horror to develop with.

      • #3276684

        Microsoft’s Lumière Project and User Anticipation

        by mx6ls ·

        In reply to Microsoft’s Lumière Project and User Anticipation

        I agree with  grantwparks.

        There are smart people outside Microsoft who can do things much better than Microsoft. What Microsoft has is research $$

      • #3276670

        Microsoft’s Lumière Project and User Anticipation

        by justin james ·

        In reply to Microsoft’s Lumière Project and User Anticipation

        In response to the idea that “if Microsoft cannot do it, no one can…”

        “Their people are no smarter than any other organization’s.  It’s a reasonably widely held opinion that MS doesn’t sell a single best-of-class product.  In some cases they’ve turned out products that are a horror to develop with.”

        and

        “There are smart people outside Microsoft who can do things much better than Microsoft. What Microsoft has is research $$”

        I agree, Microsoft does not always have the smartest people, and their products are very rarely the best in their class. However, as the second comment does make clear, Microsoft has a lot of resources that they can put towards research. Another thing that Microsoft has is the largest market share out there in many classes, giving them unprecedented access to customer feedback. Their partnership programs give them deep access to customers (and vice versa) that other vendors only dream of. Microsoft’s research division does a ton of research into things that are only peripherally related to their products, which yields a rich harvest of spin-off technologies and ideas, some of which get folded back into their core products.

        I definitely think, based on the feedback, that the sentence in question was a bit poorly phrased. The idea that I was trying to convey was that if Microsoft, with four years of research and development and all of their money and resources, could not get the idea to add even the slightest amount of value to the end user, despite the quality of the research behind it, then the usefulness of this type of functionality to users is doubtful.

        Is Microsoft perfect? Not by a long shot. As mentioned before, their products are rarely best-in-class, and often do not even make it to honorable mention until their third full version. But Microsoft does get that honorable mention nearly every single time. Granted, Clippy was the first real attempt (and the last real attempt) at user anticipation that made it to a mainstream product. But seeing as no other major vendor has even attempted something like this that I am aware of, plus the complete failure of Clippy (let’s be real, Clippy was 100% worthless), it is difficult to see that this type of functionality is useful at all. Even when Clippy did do the right thing (which was surprisingly often, from my brief experience), it was still obnoxious and fell victim to the disable switch.

        I also have to ask, “outside of Microsoft, Apple, and maybe Amazon, who else puts this much work into this type of code?” Certainly not Sun or Oracle; their products are nearly impossible for all but the most experienced users to use, and sometimes even to install. The Linuxes and BSDs seem to ignore usability (let alone advanced usability) completely. It is not just that Microsoft failed completely, it’s that very few people even bother with this kind of work.

        So although I love the idea of anticipatory software, seeing how much work got put into Clippy and comparing it with the actual experiences of end users makes me doubt its actual utility.

        Thanks for all of the great comments, and keep ’em coming!

        J.Ja

    • #3209466

      Microsoft .Net Needs Better *Nix Support

      by justin james ·

      In reply to Critical Thinking

      Longtime readers of mine will know that I have a lot of love and respect for some of the less popular languages and programming models like Perl and functional programming (in the order of “love” and “respect”). With that in mind, I am strongly encouraging Microsoft to put as many resources as possible into getting the .Net platform ported to *Nix immediately. Why? Because .Net is quickly emerging as a powerhouse for less mainstream languages! With the impending release of IronPython and the recent release of F#, .Net is establishing major credibility in areas traditionally dominated by *Nix.

      Many of the lesser known languages suffered from a lack of proper tools such as IDEs and debuggers. Debugging Perl, for example, is very similar to the way I debugged BASIC in 1991: with statements to dump variable and program state to the screen or a log file. All too many languages are in this boat. Visual Studio, on the other hand, is a fantastic programming environment.

      In addition to a lack of tools, many of these less used languages suffer from a shortage of quality libraries. For too long, Visual Basic, C#, and Java were easier and quicker to write code in, not based on any particular strength of the languages themselves, but on the huge base of quality libraries available. Indeed, the .Net Framework is an incredibly comprehensive library, encompassing most of the Windows API in a rational, consistent fashion. Meanwhile, other languages lag behind in libraries.

      As more and more languages move to the .Net Framework, it is increasingly possible and important to be able to write code in more than one language. A function that may be 100 lines of code in VB.Net may only be 10 lines in Perl. The 4,000 line monstrosity in C# can become 100 lines in a functional language such as OCaml. With IronPython and F# (an OCaml derivative) now in a usable state on the .Net Framework, the game is changing. Now, you can use VB.Net to handle the interface, catch events, and handle database operations, which it is great for; pass data back and forth to C# code to deal with the multithreading aspects of the code; and have F# perform some intense analysis of the data. All at nearly the same speed as natively compiled code. Indeed, I have read of F# spanking C# in terms of speed on some projects!
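
      The glue that makes this mixing possible is that every CLR language consumes the same framework libraries. As a toy illustration (not a production mix of VB.Net, C#, and F#), here is an IronPython snippet calling standard .Net classes; it assumes IronPython is installed.

      # Toy illustration of the cross-language point: IronPython calling the
      # same System.* classes that VB.Net, C#, or F# code would use.
      # Assumes IronPython is installed; plain CPython will not have 'clr'.
      import clr
      clr.AddReference("System")
      from System import DateTime, Math
      from System.Text import StringBuilder

      report = StringBuilder()
      report.AppendLine("Report generated: " + DateTime.Now.ToString())
      report.AppendLine("Square root of 2 from System.Math: " + str(Math.Sqrt(2.0)))
      print(report.ToString())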

      And the only conclusion I can come to is that Microsoft needs to get their act together and get the .Net Framework ported to *Nix as soon as possible. If possible, they should put a GUI abstraction layer in it, to interface with X with no changes to code needed. At the very least, as long as one is not using concepts specific to Windows and its associated technologies (Active Directory, NTFS file permissions, some aspects of networking, and so on), software should run on a *Nix platform with no changes needed. For example, if a piece of code receives a request from a Web server, connects to a database, works some magic, and outputs HTML, that should work flawlessly.

      I recognize that the Mono project exists, but it is not perfect, from a number of standpoints. The most obvious one is that Microsoft has never been great about sharing the nitty-gritty details of their technology with others, particularly when it gives them an edge. Another problem is that Mono is steered by Novell, who is on less than friendly terms with Microsoft. At the very least, the Mono community needs to be working closely with Microsoft to make sure that they get things right. From what I have read and heard, porting applications from the Microsoft .Net Framework to Mono can involve a good deal of work. I believe that unless someone is using Windows-specific code, the transition should be minimal.

      At this point in the game, the only thing keeping .Net from putting the final nail in Java’s coffin (and becoming the de facto platform for many languages) is cross-platform support. If Microsoft can get .Net running on Linux, it should be fairly easy to then get it to Solaris and the BSDs, and then finally to MacOSX. Every article I read about a new language being ported to the .Net Framework, particularly dynamic languages, says the same thing: the .Net CLR is an excellent system to base languages on. People set out to show the shortcomings of the .Net CLR, and end up becoming its biggest supporters. Ruby is quickly moving to .Net. IronPython is nearly out. F# is already released. ActiveState’s Perl is now able to work with the .Net Framework, although not as seamlessly as I would like. Within a year or so, every major language, most of the mid-level languages (Perl, Ruby, Python, etc.), and many of the minor languages (OCaml, COBOL, and so on) will be fully ported to the .Net Framework. That means that languages that used to be relegated to special-purpose uses are suddenly viable options for mainstream developers. It also means that it is now possible to perform much special-purpose programming on Windows, more easily than ever, with those special-purpose languages. This brings Windows well into the realm of science, statistics, GIS, and other niche markets.

      And the only thing missing is top-shelf *Nix support. Microsoft, are you listening?

      J.Ja

      • #3212574

        Microsoft .Net Needs Better *Nix Support

        by forozco ·

        In reply to Microsoft .Net Needs Better *Nix Support

        Microsoft wants everyone to use its OS, not Unix. And .Net is the edge to move you to Windows. I do not see how you can think that Microsoft gains something by supporting the old OS that they want to get rid of.

      • #3212505

        Microsoft .Net Needs Better *Nix Support

        by justin james ·

        In reply to Microsoft .Net Needs Better *Nix Support

        forozco –

        Sadly, you are right. In the case of expanding .Net to better support *Nix, while it may be great for the developers (with a corresponding trickle down to users), it does not fit in with Microsoft’s game plan. It is telling that when Microsoft does try to “play nice with the other kindergartners,” it is always in a very self-serving way. Microsoft will not port anything to *Nix, but they do things like release “UNIX Services for Windows” (to make Windows POSIX compliant) to make it easy to port *Nix software to Windows.

        That being said, Microsoft is very good at serving the needs and wants of developers when the business people let the technical people do so. Hopefully this will (eventually) be a case where Microsoft realizes that they can make money regardless of which platform they are servicing.

        J.Ja

      • #3202144

        Microsoft .Net Needs Better *Nix Support

        by mark ergot ·

        In reply to Microsoft .Net Needs Better *Nix Support

        I would agree, a decent CLI would be a nice addition to WinBlows. However, mucking up the nice work M$ has put into .NET 2.0 is not going to help anyone. Maybe the focus should be on the CLR as an extension of M$ to other systems. In case you needed to run an M$ application on whatever stable system, simply download and install the CLR and voilà… a miniature blue screen on your box that doesn’t crash the whole system.

    • #3212445

      The End has been Nigh For Some Time

      by justin james ·

      In reply to Critical Thinking

      Justin Rattner over at ZDNet recently posted a blog about the end of applications. Mr. Rattner is right, yet incredibly wrong. He is absolutely right in that “all of the interesting applications have been written.” In fact, I am on the record as stating that it is virtually impossible to create a single new application that could not be done (however inefficiently) with other means and tools.

      That being said, Mr. Rattner’s analysis of where the future lies is misguided at best. He believes that the future lies in reducing operating costs and increasing manageability. I am all for reducing operating costs and increasing manageability. But usability is the most important missing piece. Computers are completely unusable. Even cell phones are unusable. Most cell phone consumers pay big bucks for phones where the only feature they use is the same one they were using 10 years ago: making phone calls. Why? Because the interface and OS for cell phones is atrocious.

      Mr. Rattner’s viewpoint is that of a hardware person. This is understandable; he is the CTO of Intel (amongst other technical titles). Hardware people have never understood end users terribly well. Not that application developers do such a great job of it either, but at least they make the product that the customer is actually using. Even worse, Intel specifically understands neither application development nor end users. They make general purpose CPUs designed to disappear into the user’s life. Intel has never once done anything to help me as a developer. When Intel started pushing HyperThreading and dual core CPUs, did they put anything out to help teach the average programmer how to write programs that perform well on single core and dual core CPUs? No. Did they put any effort into helping Open Source Software projects optimize their performance to take advantage of the new hardware? No. Has Intel ever put any effort at all into writing code that end users see? Nothing except for a few drivers and system tray applications for their NICs and video chipsets. So why would I think that anyone at Intel has any understanding of developers, the development process, or what end users want?

      If you are an IT manager, would you prefer to go through yet another round of hardware purchasing to consolidate a few servers and reduce the power bill… or would you prefer to initiate a usability improvement session on your existing software? Most IT managers will choose to buy the hardware. I do not care how much hardware you have or how powerful it is. If the users cannot use it, it is wasted money. It is like owning a Corvette but driving the Long Island Expressway to work: the power is wasted. On the other hand, increased usability, through interface (and some underlying logic) changes, yields rich results with the hardware and applications we have today. For example, a website that performs a round of usability improvements can expect to see sales increase by a dramatic amount.

      We have been playing Intel’s game for the last 20 years, and it has not helped the end users. IT does not exist for the sake of enriching the wallets of hardware vendors, application developers, or consultants. Let me rephrase that. IT should not exist for those reasons. But it does. As a result, users hate their computers. Companies spend a fortune on IT, buying hardware to compensate for sloppy programmers, and buying software that is “the version you have been waiting for your whole life” but still does not address the underlying flaws in all of the previous versions; instead it has a few new features buried in a menu somewhere, and it now lets you skin the interface.

      I have been a computer user for the last 20 years. I have owned a personal computer for at least 15 of those years. Yet, at the end of the day, my usage habits have been frozen since about 1994. Here is what I use a computer for (in no particular order):
      * Checking email
      * Chat/instant messaging
      * Consuming content, primarily text, but sometimes audio/visual
      * Creating content (95% text, 5% visual)
      * Programming
      * Playing video games
      * SSH’ing to *Nix servers

      I was able to do all of this, in one form or another, on a 486. I was able to do most of it on a 286. Indeed, my 286 with that 2400 baud modem was actually able to feed me remote text (my primary target of consumption) about as fast as my current PC does over a broadband connection. Why? Because it was not dealing with a Web server or having to parse HTML, and then fire up a JavaScript interpreter just to make a menu pop up!

      And sad to say, the actual usability of the computer has gotten worse, not better.

      I postulate that while achieving a minimum level of proficiency with a computer is now much easier, the difficulty and time needed to become a power user have increased substantially. For example, learning the basics of WordPerfect 5.1 was a bear. But once you had that little template laid over your F keys, you could do anything you wanted to a document with a few short keystrokes, without ever taking your hand off of the keyboard. Compare this to Word 2003. Word does not have many word processing features that WP5.1 did not have. Its extra features are on the periphery, in the form of things like Smart Tags, the “Research” system, and so on. Yet, doing the same tasks that I did in WP5.1 in fractions of a second now takes many seconds! Take my hands off the keyboard, reach for the mouse, highlight the text, go to the menu (because there are more items than keyboard combinations), scan through the endless cascading menus looking for the task I want to perform (instead of glancing at the template laid over the F keys), sometimes taking a minute, then walk through some dialog or wizard. All to create the exact same table which took under 10 keystrokes and five seconds in WP5.1.

      If anyone considers this “progress” they are mistaken.

      Mr. Rattner is correct in that the vast majority of “improvements” really fall under the eye candy category. RSS is eye candy. HTML is eye candy. AJAX and “Web 2.0” are irrelevant eye candy. Granted, my definition of “eye candy” is rather broad; much of what these technologies deliver is added convenience and even the occasional efficiency to the end user. But the value of the content or the application itself is not increased by much at all. None of these technologies truly improve the human/computer interaction. There is nothing that these technologies deliver that could not be done 20 years ago. But the hardware vendors love these technologies. Want to sell faster CPUs? Get programmers to ditch flat files for tabular data in favor of XML. Want to sell memory? Get the users onto AJAX’ed systems (“JavaScript is the COBOL of the Web” to quote Roger Ramjet from ZDNet TalkBacks) and off of natively compiled code. Pushing horribly cluttered interfaces is a great way to get people to buy gigantic monitors and the video cards that go along with those higher resolutions. Convincing people that it somehow makes more sense to watch a movie on a 19″ monitor instead of their 35″ TV is another smart move.
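
      The flat file versus XML point is easy to make concrete. Here is a back-of-the-envelope Python sketch comparing the same tiny table stored as delimited text and as XML; the records are invented, and the point is only the relative overhead, not a benchmark.

      # The same tiny (invented) table as a flat delimited file versus XML.
      import csv
      import io
      import xml.etree.ElementTree as ET

      rows = [("1001", "Smith", "open"), ("1002", "Jones", "closed"),
              ("1003", "Nguyen", "open")]

      # Flat file: one delimited line per record.
      flat = io.StringIO()
      csv.writer(flat).writerows(rows)
      flat_bytes = len(flat.getvalue().encode("utf-8"))

      # XML: every field wrapped in named start and end tags.
      root = ET.Element("tickets")
      for ticket_id, customer, status in rows:
          t = ET.SubElement(root, "ticket")
          ET.SubElement(t, "id").text = ticket_id
          ET.SubElement(t, "customer").text = customer
          ET.SubElement(t, "status").text = status
      xml_bytes = len(ET.tostring(root, encoding="utf-8"))

      print("flat file:", flat_bytes, "bytes; XML:", xml_bytes, "bytes")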

      So while I think that Mr. Rattner’s starting premise is right, I think that his analysis of the situation in general is not correct.

      J.Ja

      • #3277312

        The End has been Nigh For Some Time

        by just_chilin ·

        In reply to The End has been Nigh For Some Time

        I couldn’t agree more.

        No matter how much memory or CPU speed your computer has, Microsoft will always find a way to slow down your PC.

         

      • #3277291

        The End has been Nigh For Some Time

        by justin james ·

        In reply to The End has been Nigh For Some Time

        “I couldn’t agree more.

        No matter how much memory or CPU speed your computer has, Microsoft will always find a way to slow down your PC.”

        I really wish it were just Microsoft. Everyone seems to be guilty of it. Look at the Firefox project. It started out as a replacement for Mozilla, because Mozilla was incredibly heavy, loaded up with all of the sludge from Netscape. So they stripped it to the bone and removed everything from the suite except for the browser itself. Now Firefox is just as heavy as the Mozilla browser was (if not more so) and is getting filled with security bugs thanks to the new features. It is pretty sad, because the original idea was perfect.

        A few pieces of Open Source Software, freeware, and shareware avoid this. I have a piece or two of software that has not added a new feature in two or three years, simply because the product is just right as is, and they release a minor bug fix once in a while. That is software perfection, IMHO.

        Unfortunately, any company whose financial success rests on the gravy train of upgrades (MacOSX is the worst offender out there) is forced to take great software and load it up with new features every year.

        J.Ja

      • #3277254

        The End has been Nigh For Some Time

        by debuggist ·

        In reply to The End has been Nigh For Some Time

        Part of the reason usability is so bad is the users themselves.

        How many of us have tools that we love and use repeatedly? Consider all the tools available in Linux or UNIX. We use those tools because they are simple and single-purpose. But we can also chain them together to create our own mini-app that meets our specific purpose.
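
        That chaining idea is easy to show. As a small sketch (assuming a Unix-like system and an illustrative log path), here is a Python script that strings together a few real single-purpose utilities into a mini-app that counts the most common error lines in a log.

        # Chain simple single-purpose Unix tools into a mini-app from Python.
        # Assumes a Unix-like system; the log path below is only an example.
        import subprocess

        grep = subprocess.Popen(["grep", "-i", "error", "/var/log/syslog"],
                                stdout=subprocess.PIPE)
        sort = subprocess.Popen(["sort"], stdin=grep.stdout, stdout=subprocess.PIPE)
        uniq = subprocess.Popen(["uniq", "-c"], stdin=sort.stdout, stdout=subprocess.PIPE)
        top = subprocess.run(["sort", "-rn"], stdin=uniq.stdout,
                             capture_output=True, text=True)

        # Show the five most common error lines, most frequent first.
        print("\n".join(top.stdout.splitlines()[:5]))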

        Most users don’t want to do that; that’s too much thinking. They want to push a button that gives them the results they want. So application vendors clutter an application with multiple purposes to appeal to a broad range of users.

        Some applications are implementing the chaining together of simple, single-purpose apps into a larger app with a plug-in architecture (e.g., IDEA and Eclipse). I love that approach, and I wish more application vendors would use it. It requires more thinking on the user’s part, but the user gets a best-of-breed app for each purpose instead of one app that does some things well but others not so well.

        Are there word-processing or spreadsheet apps that use a plug-in approach? I would consider using them, and I bet other users would if they realized the usability improvement offsets the time to find any plug-ins they need.

      • #3277147

        The End has been Nigh For Some Time

        by just_chilin ·

        In reply to The End has been Nigh For Some Time

        I think the Eclipse idea has been one of the best. But what happens when an application overgrows (too many plugins)? How do you control that (load on demand, etc.)?
        It seems to me Firefox tries to load every plugin when it starts. Can someone please explain to me why Firefox is using up all my memory (sometimes with just two tabs open)?

        Listening to a hardware vendor predict the future of software is like taking JavaScript courses to better understand Java (even better… I just saw this today); it is like saying a car and a carpet are the same thing.

      • #3199811

        The End has been Nigh For Some Time

        by pkr9 ·

        In reply to The End has been Nigh For Some Time

        Really it’s quite simple. Treat IT as any other kind of business machinery. In the machining department, they don’t buy a new lathe just because the supplier made a new model. They wouldn’t even dream of buying one that only runs on 867 volts and 27 Hz, with everybody else using 220V 60Hz or 110V 55Hz.
        They find out first if the one they have does its job. If not, the production manager scrutinizes the market and does some calculations with the help of finance and maybe marketing, to see if the replacement will make the company more productive or add to the bottom line. If the result is negative, they keep what they have.

        This is what we do in IT, and how we changed from anything to Windows and ethernet – no?

        Then why is it different? Because the suppliers have told us that we need the latest scream to do what we do perfectly well with what we have. We must move from being supplier-driven to being driven by business needs.

        I once had a major debate with the IT manager running a big international company, who boasted that he had converted from NT to W/2000 in three days on 15,000 desktops – one week after release. He was even highly praised in the press. In my opinion he was (and is, because he did it again with XP) a complete idiot, taking a very high risk with a company that really relies on IT to work, having many laboratories and factories spread worldwide. I asked what the business purpose was – would it add to the bottom line in any way? He could not answer, and finally admitted that they were a Microsoft test installation. Microsoft benefited from all the press hype about how easy it was; his company took all the risk.

        I did a survey of what the common office user needed in an office suite, and it is about 5% of Microsoft Office, and about the same percentage of StarOffice or OpenOffice. Most would be better off using IBM Text Assistant, costing 10 bucks, running on a 286, and not having any 'wizards' trying to 'help'.

        I still use Office/97 – at home it is StarOffice on Linux – even though as an IT manager I have seen all versions up to Office/2003. The company uses a newer version (Office/XP) due to management believing everything coming from MS, but there is no real need for the common user to use the newest. The fastest word processor I have seen documented is WordPerfect 5 for DOS, which leaves everything else behind in the hands of a user who knows it.
         
        The GUI is just too slow. We did the tests when changing from a 'green terminal' GUI to a 'modern' GUI in the ERP system. The same operation took between two and three times as long in the 'modern' version as in the 'obsolete' version; the main culprit is the mouse. But I admit the displays were pretty, and users loved it. Top management accepted a factor-of-two performance drop – the same guys who lay off staff faster than you can say 'fired' when pressed on the budget.

        I still have my old 75 MHz Pentium-powered OS/2 Warp PC, and it is ready for work faster than my 3 GHz, 1 GB RAM, 200 GB disk desktop, and it will do anything the current PC does. Those are configuration figures that a MAINFRAME guy wouldn't even have dreamed of 10 years ago, and we use this under-the-desk mainframe, connected to a network, for writing letters. A configuration like that in a mainframe or iSeries would run a decent-sized bank.

        We have fought the PCs for 25 years now. Time to say goodbye and use something else – anything else.

      • #3199794

        The End has been Nigh For Some Time

        by pharus ·

        In reply to The End has been Nigh For Some Time

        Absolutely true. The question is: how do we convince the IT community at large to keep using current technology, i.e. hardware and software? We primarily have two corporations, namely Intel and Microsoft, forcing their products onto the market through excellent marketing. I've actually started to wonder what Microsoft's main line of business is… is it software development or is it marketing?

        MS Vista is on its way, yet I have clients still happily working on Win 98. How do we get the rest of the world to be satisfied using Win XP and MS Office XP (better yet, OpenOffice)…?

      • #3199785

        The End has been Nigh For Some Time

        by abrogard ·

        In reply to The End has been Nigh For Some Time

        Yes, this is all correct. There's obviously something very, very wrong. Even the smart guys aren't being smart. Software such as eMule could be delivered to the user as a collection of separate functions to be bolted together, as an earlier poster intimated, to make up an application performing the way you want. Then if there are problems with it – as there are with eMule, crashing on XP – any user would have a better chance of knowing which function, which part of the programme, is causing the crash. Or it could, perhaps, even have its own debug mode built in – peppered with exception handlers providing information. But instead what does it have? A user group, a forum, where the dedicated experts claim the fault cannot be eMule's, it must be MS XP at fault. End of story.
        How about my favourite complaint – the Windows command language versus DEC's DCL? Years ago I ran a VAX site and used DCL extensively. What a beautiful, useful tool. Nothing like it that I know of in the Windows world. Why not?

        How about the plethora, the myriad, the multitude of files and filename extensions we have nowadays? But how slickly can we work with them? Not at all. Once I used to be able to filter my directory displays for only these or those files – no more; now I must go through this whole cumbersome 'search' rubbish.

        Once I wrote in Clipper and added C functions when I needed them, and even a little assembler when I needed it – and, as the previous poster mentioned, I could do anything I wanted with my PC. Nowadays I can do nothing. Oh, I can run an app. I can write an app with Delphi or some such, but that's nothing like the power I used to have. Aren't we supposed to have more power now, not less?

        Yes, there’s something badly, badly wrong. A few months ago I got my first 200gig drive. Unheard of sizes. Now last week I got another 300gig drive, because 200gig wasn’t enough. A 10gig system partition is not big enough. Say hello to software bloat.

        Yes… there’s something terribly wrong…

        But… there’s something right, too…

        My 200gig drive is full of movies I’ve backed up and made myself and edited myself.  I couldn’t get anywhere near this functionality before.

        My processor is 3gig and it often works to the maximum transcoding and such – i.e. there’s a need only now being met.

        I am aware of what is going on in the world across all national barriers and I never read the newspaper.

        I am in touch with (potentially) nearly everyone in the world – all the common people, that is, not the ‘uncommon’ people, who remain cut off from us, running and destroying our lives from on high – this is power to the people, dawn of a new age that the dinosaurs just can’t see, can’t understand.

        I get radio broadcasts from all around the world.

        I don’t get tv and don’t want it – tv is exposed as a ridiculous white elephant. We can see now that tv has modelled itself on the hollywood screen where it should be modelling itself on the computer screen… but, of course, it wasn’t to know that prior to the computer (the pc that is) and now it is too late, who needs it?

        I keep records of all transactions with government and business and many times have saved myself and family trouble and expense because of these records – i.e. these computers are empowering us, we are not the prey we once were.

        I haven’t been into a bank in months and I look forward to when my banking moves offshore into netspace and I’m free of national boundaries.

        My common humble human interests I now post for the world to see on Youtube and my blog and my webpage and such and I become part of the humble human race presenting the same face, sharing the same interests manifestly, showing ourselves and knowing ourselves so’s we are more and more proof against being misled.

        And there’s your usability, my friend. The computer is usable. The common people are using it. But they aren’t using it the way you, as an expert, as a programmer, as an insider, might think they should use it. But that doesn’t matter. Don’t be misled into thinking they aren’t using it, can’t use it, that it is unusable.

        They use it on blogs, on personal websites, on p2p, on file sharing, on forums and I don’t know what else…  And they are callously indifferent to software bloat, the general interface or anything at all besides what they want… they just go for what they want… and they get it.

        The human race is manifesting a global consciousness, a global presence and it is doing it via the web and the pc and it is all growing daily more sophisticated. Don’t worry. It is getting better.  But they, the people, are leading the way, not you and I, old programmers and know-it-alls from a bygone era….

        I think.  🙂

      • #3199765

        The End has been Nigh For Some Time

        by admin ·

        In reply to The End has been Nigh For Some Time

        You've made some astute observations and fingered some problems, but you also have some misplaced expectations:

        1. Intel is a component maker. That is all. The one who makes the bolts doesn’t make the derrick. You have not been “playing Intel’s game” because it has not been Intel’s game to play.

        2. Computers are only machines, not living organisms. Screwdrivers turn screws. Don’t expect a screwdriver to plant the field. Computers are not going to do anything for you that you don’t need done in the first place, and if you do need it done, of course there is a variety of ways to do it, some of which may not have been discovered yet.

      • #3199745

        The End has been Nigh For Some Time

        by 1tnfrench ·

        In reply to The End has been Nigh For Some Time

        Oh is this blog sooooo true!

          The information age has been reduced to just another sales portal.

        The world's largest library has been seduced, raped and prostituted (and I work in eCommerce – sorry).

        The modern software world keeps piling on user-friendly aspects to simplify and foolproof systems and software. Unfortunately, the target audience (the novice user) is going the way of the dinosaur.

        As an analogy, most of us know how to ride a bicycle. But now we need helmets, pads, wind-resistant fabrics, carbon fiber, exotic alloys… I can either spend 5 bucks at a yard sale or 5 grand at a bike boutique. The function is the same. My old (real old) ten-speed works just as well, keeps up with, and is just as comfortable as the high-tech, custom-built Trek I got as a gift. The old ten-speed also allows me to fill the tires at the local gas Shoppe, hop curbs, and not worry about rocks, cracks, and potholes.

        For the "Form follows function" crowd — newer, better, bigger does not make sense. Wizards, assistants, pop-ups, mouse-overs, shadow icons — argh! Let us re-invent the world one more time! The computer industry has spent the better part of the last 25 years taking sound, established principles and methods of operation and contorting them, melding them, and confusing them. (Remember when SDLC meant Synchronous Data Link Controller?)

        Now enter Media and Entertainment.  Music — I have a home theater (An old laptop is the music juke box).

        Video — I have a network connection [cable, sat] that is digital and works almost as well as analog!

        My solution is less direct. Prices are such that single-purpose, dedicated resources make sense.

        A really nice mid-90s network is now dirt cheap. Functionality has changed only slightly.

        Uncle Sam still accepts the tax numbers off the ‘386 laptop (hack that through the air gap firewall!).

        One system is the firewall, another, the fax server, a proxy and then the SOHO and home network.

        Security is of my own design and will stay that way.

        I will not be upgrading to Vista or the new Office suite (I don't need the overhead).

        Most of the "new" apps I have seen are add-ins or advanced macros for older apps.

        Now if I could just keep people from stealing my bandwidth…

        tnfrog

      • #3199708

        The End has been Nigh For Some Time

        by caverdog ·

        In reply to The End has been Nigh For Some Time

        I have a Pocket PC phone that allows me to carry 4 movies, 100 songs, and 100 books in the hidden pocket of my Dockers Mobile Pants. My computer will rip 13 DVDs to my PlayStation Portable (which also plays amazing games), and my Media Center PC lets me pause live TV, record any program I want, and copy it to my phone or PSP. I was a secretary 15 years ago (WP51 on an 8088), and now I am a Security Engineer (a contractor for the Navy). I mostly write policy, and for large documents with diagrams, WP51 cannot compete with Word 2003. For the person who lamented the keyboard shortcuts, they are still there, and you can set your own with macros. As for highlighting text with the mouse, keyboard text highlighting with the Shift key is much easier than pressing F4 and hoping. I've worked many places, and in the places where the users hate their computers, it is because there is no training or the system administrators don't ensure availability. Word will now do versioning, and Outlook does amazing things if a user is trained and the system administrator knows what they're doing. This site has much active content and yet runs well and is easy to use.

        For the person lamenting the helmet on their bike, you're an idiot. Helmets are for safety, something that wasn't a concern back then, and a lot of kids got hurt worse than they needed to. As for the rest of the bike, I can take terrain on my mountain bike that your 10 speed couldn't think of. However, your 10 speed is just as good for getting a Coke from the 7-11 down the street, so if that's all you need you should never upgrade. I have 8 computers at home ranging from a PowerPC G3 (a screamin' 233 MHz!) that still does everything it used to, to a 2.8 GHz machine that I use for gaming. All have their uses, but don't pick on my 2.8 just because it's beautiful.

      • #3283859

        The End has been Nigh For Some Time

        by lelliott ·

        In reply to The End has been Nigh For Some Time

        I so agree. As a usability engineer, I am a defense lawyer for invisible clients at development meetings. It’s not the users and it’s not the developers. There is no communication between the two and often no user testing.

        The developers have fabulous ideas. The users have realistic needs but are never heard. It’s better in some companies and worse in others.

        Companies are starting to figure out that poor usability costs. When they realize it is costing them, sometimes it’s too late to do the usability testing. All you can do is fix it in the next release. It takes blogs like this and actual users to demand usability testing. There are usability engineers, championing the cause, in nearly every computer/software company. We test actual users on the product before release and produce recommendations to make things friendlier. Sometimes we make a difference and that difference is invisible. It’s easier to spot poor usability than good usability.

        Companies need to hear this from you. Usability must be a requirement from the customer at all levels.

        For more on usability, see: http://www.useit.com/alertbox/991114.html and http://www.baddesigns.com/

        For more on usability engineering: http://www.upassoc.org/ and http://www.hfes.org

        usability forever…

         

      • #3209071

        The End has been Nigh For Some Time

        by kdnoel ·

        In reply to The End has been Nigh For Some Time

        Well said. I have been along for the ride as well… I will admit my first "PC" was a Model 1 from Radio Shack with a cassette for storing my programs – we were very creative with the 16K memory limitation.

        Old Dog looking for new tricks!

         

      • #3209048

        The End has been Nigh For Some Time

        by betelgeuse68 ·

        In reply to The End has been Nigh For Some Time

        I don't quite buy the argument that fancy new interfaces will solve all end user problems. One word that was often used to describe the advent of the Macintosh, as well as Microsoft Windows (though version 1.0 was just flat-out awful), was "intuitive". Intuitive refers to intuition. One definition of intuition is "having a quick and keen insight." But where is this insight derived from? From a superstructure comprising a knowledge base of one's frames of reference. If you have never worked in a given context, intuition does not spring out of the ether as if by magic – far from it. I remember when my sister, back in college, needed to write a paper. I had Windows 3.x on my machine and fired up Microsoft Word for her. She said, "Well, how do I use this?" My response was "It's just like the Macintosh," trying to quickly disappear from the situation and leave her to her task. All I got was a blank stare. I had made the assumption that everyone had been exposed to graphical user interfaces and had some frame of reference, i.e. intuition, but that was simply not the case.

        Einstein once said there are only two things that are infinite, the universe… and human stupidity. And not necessarily the former.

        People ensconce themselves in the routine and subconsciously erect barriers that no amount of software development will overcome. Yes, perhaps some eager grandma in a UI study will "overcome" some barriers, but that person is not representative of the norm.

        My parents still cannot program their VCR and my expectation is they never will be able to. They grew up in a different time and simply do not have the collective experience where the light bulb of intuition turns on.

        This problem will subside substantially when the generation(s) that did not grow up immersed in technology pass into the afterlife.

      • #3205568

        The End has been Nigh For Some Time

        by obviator ·

        In reply to The End has been Nigh For Some Time

        I couldn't agree more. I have over 400 users on several different subnets, and only a few could actually use anything more powerful than a 486. E-mail, writing letters, memos, etc., surfing the web – nothing that Win3.1, or DOS for that matter, couldn't do, and do better, than WinXP on my current "power" machine (3.2 GHz, 1 GB RAM, 80 GB HD). None of my users can do a letter in Word to more than one person/entity without typing the letter at least twice. I could do mail merge with WordStar 3 twenty years ago more easily than I can with MS Word 2003 and MS Access now. Plus it was easier to teach end users how to do it themselves. As a sysadmin I enjoy things like SMS that let me stay put and help out remote users, but walking over puts my face in their memory (and exercises my butt too!). As far as I can see there is nothing new out there; even FPS games haven't really progressed beyond Doom 3D, the play is basically the same, there are just more bells and whistles. Only conveniences have occurred (Look Ma, no more DIP switches!), no real operational changes or enhancements. It's all eye candy.

      • #3205514

        The End has been Nigh For Some Time

        by justin james ·

        In reply to The End has been Nigh For Some Time

        “This problem will subside substantially when the generation(s) that did not grow up immersed in technology pass into the afterlife.”

        This is sad, true, and yet so deliciously morbid… I love it! I want to go to my boss one day and tell him, “Well, we just have to wait for all of the users over the age of 30 or so to die off, and then all of the problems will be solved. Shall I ready the neutron bomb sir?”

        J.Ja

    • #3166680

      Oracle is the T.O. of Software

      by justin james ·

      In reply to Critical Thinking

      Oracle is the Terrell Owens of software: you absolutely despise playing with it, but its performance is almost good enough to make up for its unforgivable manners. Almost. Today I had the misfortune of having to perform some development work with an Oracle database. I had not touched the Oracle client installation on my desktop machine in months, except to update the Oracle .Net tools a few weeks ago. I updated them, but had not used them. When I tried to make a connection to my development database, the drivers were just giving me a big "New York greeting." I never did resolve the problem; I kludged together a workaround. I ended up just using a different system on my client's VPN and working with their development database. I put two hours into resolving the problem, and simply did not have any more time to waste.

      Over the last year or so, I have spent well over 40 hours wrestling with Oracle. That is a full 2% of my yearly productivity devoted to nothing but trying to get their "tools" to stop giving me grief. I can easily chalk up at least another 20 hours over the last year trying to find errors in SQL statements that Oracle does not help me find. "Missing right parenthesis" is not very helpful when one is doing a JOIN on 106 subqueries. MySQL is nice enough to show me exactly where the problem is in the error message.

      And then there is the sewer known as "sqlldr.exe." Another complete waste of time to deal with. Again, comparing to MySQL, I have to write a script file to do what MySQL can do in one "LOAD DATA INFILE" statement. Even the command line for sqlldr is complicated. For whatever reason, my current working directory needs to be the data directory, or else the files do not seem to be found.

      Oracle Enterprise Manager is pretty foul too. It has menu and toolbar items that tell me to go use “Oracle Enterprise Manager Console,” which, despite its name, is not the same program, but a Web based app. The software itself barely works. It leaves a Java console window in the background, which is constantly spitting out various uncaught exceptions. The software has a positively miserable interface. Its worst habit, though, is dropping the first character typed into any text box. I have never seen that kind of behavior in any other application.

      Even the Oracle client is a shrew. Each time you install an Oracle product, it tramples all over the file permissions on the client, rendering it unusable. Due to a quirk in Windows' file permission check system, after correcting the problem you need to reboot your PC. Oracle is the only database that requires a monstrosity of a client; it is like Novell NetWare circa Windows 3.1. Do not even get me started on the idiotic tnsnames.ora file contraption. Talk about a manageability nightmare! Every other database out there just needs a small ODBC driver installed (or a JDBC driver in a convenient JAR file, for Java developers); make an ODBC connection, and poof, you are connected. Not Oracle. It needs to turn your computer into a mess, and you still need to create the ODBC connections anyway.

      Dealing with the Oracle corporation itself is a David Lynch film at best. Their Web site makes it clear to me that they are not interested in helping me. The search system on their Web site is horribly broken, which reflects quite poorly on their database products. Trying to find updated tools and drivers is a hassle. Not that they update their tools very often; Oracle has a poor track record on bug and security fixes. And again, when you do update the tools (using their awful Java installer, which takes two minutes just to start, and always gives me grief about where to install things), it destroys your existing setup, requiring a three-hour trip down search engine lane to fix the problems.

      I am not even sure if Oracle's performance is that great. Microsoft SQL Server, when properly configured, can be pretty fast, close to, and occasionally exceeding, Oracle's. MySQL is catching up very quickly, and PostgreSQL is no slouch either. And all of them are infinitely easier to deal with than Oracle. Oracle's dominance of the database market is proof positive that the people who make IT purchasing decisions are not the people who have to live with those decisions. Why is it that there are no MySQL, PostgreSQL, or SQL Server consultants billing out at $250/hour, but there are Oracle consultants doing so? What does that tell you about Oracle?

      Oracle just does not get it. When your developers tell you that using Oracle over another database makes development take 10% longer, or your DBAs cost you 25% more, or management on users’ desktops is 250% more difficult, or integration costs are $250,000 versus $10,000 for anyone else, eventually you are going to start to wonder if a 5% performance edge is worth it. I do not think it is. In its current state, I would not recommend Oracle to anyone. I simply fail to see its advantage, and I have yet to hear from a developer who likes dealing with it. The only people I have talked to who like Oracle are billing out at divorce lawyer rates to integrate and support it.

      I would love to hear your Oracle horror stories, or any defense of them.

      J.Ja

      • #3166529

        Oracle is the T.O. of Software

        by mark miller ·

        In reply to Oracle is the T.O. of Software

        The last time I dealt with an Oracle database was in 1999, Oracle 7. I had been working with it for about 2 years.

        Your story about the error messages sounds very familiar. I used to run into that a lot. I can’t remember what the errors were, but they never told me where in the statement it was. Sometimes the error messages were cryptic, too. I just got used to them over time and got quicker at finding the problems.

        I do remember an incident that was royally frustrating. This was the one time when I felt Oracle failed me. We were working on a new project with a deadline that was closing in on us. We had Oracle 8 running on a Windows server, but our transaction server software only ran on Unix. Either there was no Oracle 8 release for SCO (the kind we had), or we didn't want to spend the money to get it. We figured that Oracle 8 should be backward compatible with Oracle 7's development tools. Wrong! This discrepancy was not obvious, though. I ran some tests, trying to get some embedded PL/SQL to work, but it kept failing for some mysterious reason. All I was using the embedded PL/SQL for was to call a stored procedure from my C code. I had to map binary values from my C code to PL/SQL values, and then call the procedure. It kept telling me "type [typename] undefined", with the []s containing some PL/SQL type I was using, which was defined inside some scope in the database. I knew I had the scope right. I tried running the same call from within an Oracle command line, and it would always work. I kept looking at it, and couldn't see where the problem was. I finally got desperate and asked on an Oracle newsgroup about it. The others on there were very helpful. What we eventually found out was that the Oracle 7 development tools were not totally compatible with Oracle 8. What I ended up doing was creating an embedded Dynamic SQL call, which contained PL/SQL code, within my C code and then passing it wholesale to Oracle 8 to compile and run. That worked. The problem with that was that if there were any problems with the PL/SQL, we didn't find out until run time. Typically the Pro*C pre-compiler would check for consistency and output compile-time errors if something didn't match. There were dozens of transactions that had to be done this way. I could see it was going to be a maintenance nightmare going down the road.

        Back then Oracle Server had no UI tools, except for a couple decent ones on Windows. Everything was done on the command line on the server. This improved with the last update to Oracle 7, when they finally added a decent UI “control panel” for the server that worked reasonably well.

        I remember that you had to install “Oracle Client” on Windows machines, and then hook up to that with ODBC.

        One thing that was often a pain in the butt for me is when it came time to do a tape backup of the SCO Unix system it was installed on. I know that there was a way to configure Oracle 7 to create logged backup files, but I didn’t bother setting that up. We were a small company, and so shutting down the database for a while was no big deal, since it was done at the end of the day when no one would be accessing it. That’s the process it would go through. I wrote a script that would pipe commands into sqlcmd (I think it was called) to shut down the server, do the backup with a piece of third-party software, and then bring the server back up. The problem was if anyone had a database connection open at the time, the server refused to shut down until that connection was closed. I don’t quite remember how this typically happened, but it was not because someone was using the database. They may have been doing some software testing with a product we were developing that accessed the database, but the instance of the software had been shut down a long time ago. For some reason Oracle would think that somebody was still connected. Maybe this was a “pain point” with Sql Client. I vaguely remember that if a piece of software that was using it crashed, some part of Sql Client would be left running with a connection to the database. You would either have to kill it off in Process Manager or kill some DLL instance that was still up and running. I forget. Our development and test machines were running Windows 9x at the time. Often what I had the script do is watch to make sure Oracle Server had shut down, and if it hadn’t it would just issue a kill -9 (kill unconditionally) command on it, and proceed. No harm done.

        I can remember that the documentation that came with Oracle Server was atrocious. I remember asking someone once who was more experienced with Oracle why this was. He told me “Their business is consulting and training.” That said it all. In order to learn how to use their software, you had to pay someone more knowledgeable to teach you. We didn’t take that route exactly. I can’t remember which publisher it was, probably Osborne Press, but they published an excellent series of books that explained just about everything a beginner would need to know about Oracle. My boss found it. These books were my savior. MUCH better than Oracle’s own documentation.

        I remember once as well that applying an update to Oracle Server was a huge pain in the butt. We were installing it on SCO Unix, and it required something that might've been called a "virtual partition". It was some special type of partition (maybe it was a partition within another one?) that would grow dynamically as the data size requirements grew. My boss and I were trying to figure out how to set that up, because Oracle's installer wouldn't do it. What I remember is that the installer would get halfway through and then die. Everything ran on SQL scripts. It would die because it couldn't find some table it was looking for in the database, or it didn't find a value in a table it needed. I think this eventually improved as well. Later on they created a UI for their installer and things worked like clockwork. Like I said, there was a "last update" that we applied to Oracle 7 that worked out. It seemed like they had finally gotten all of the bugs out of everything.

        The thing that seemed to make it all worth it at the time was that Oracle seemed so robust. It was known for its redundancy, so that everything that should get done, would get done, even if the power went off in the middle of a transaction. That’s what I was most impressed with. With Oracle you could feel safe that your data would never get corrupted by some freak accident. People have said when I tell them this, “Isn’t that what UPSs are for?” Yes, but I’ve seen UPSs fail. Over time the battery will wear out, and eventually it has to be replaced or else you have no battery back up.

        Since that time I’ve been dealing with Access and SQL Server databases. Access has its quirks, and it’s not very efficient, but it’s fine for some small projects I’ve worked on. SQL Server has been fine, once I’ve learned its features. I only had a problem with it once trying to import a database from one site to another. Otherwise it’s been much less trouble.

      • #3229660

        Oracle is the T.O. of Software

        by justin james ·

        In reply to Oracle is the T.O. of Software

        Mark –

        Thank you for reminding me about their worthless documentation. I actually forgot that Oracle has documentation, because it is that bad. I too need an Oracle book sitting on my shelf, to remind me about all of its quirkiness.

        Recently, I was working on an Excel project where the spreadsheet had an embedded SQL statement in a few sheets to dynamically pull data. For whatever reason, I could not get the SQL parameters to work with it (it worked fine against MySQL and SQL Server, though!). As a result, I had to put bogus values in the SQL command, and at run time store the original statements in variables, perform a search/replace on the command itself to swap in the right values from the spreadsheet, and then swap the original commands back in when it was done running. For whatever crazy reason, I do not consider that "elegant" or "reasonable." In fact, it seems like Oracle is always giving me problems with parameters, regardless of the language I am using.
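
        For anyone wondering what that workaround amounts to: it is string substitution standing in where bind parameters should have worked. The same contrast, sketched in JDBC terms rather than the Excel/ODBC setup described above (the table, column, and values are invented, so treat this purely as an illustration), looks roughly like this:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class ParameterContrast {

            // What normally works: let the driver bind the value to the "?" placeholder.
            static ResultSet withBindParameter(Connection conn, int regionId) throws Exception {
                PreparedStatement ps = conn.prepareStatement(
                        "SELECT SUM(amount) FROM sales WHERE region_id = ?");
                ps.setInt(1, regionId); // the driver handles typing and quoting
                return ps.executeQuery(); // (statement left open for brevity in this sketch)
            }

            // The kludge: keep a bogus token in the SQL text and search/replace it
            // with the real value just before running the statement.
            static ResultSet withTextSubstitution(Connection conn, int regionId) throws Exception {
                String template = "SELECT SUM(amount) FROM sales WHERE region_id = :BOGUS:";
                String sql = template.replace(":BOGUS:", Integer.toString(regionId));
                Statement stmt = conn.createStatement();
                return stmt.executeQuery(sql); // (statement left open for brevity in this sketch)
            }
        }

        The substitution route runs, but it gives up the driver's type handling and quoting, which is exactly why it only belongs in the corner a balky driver forces you into.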

        You are right about consulting work being their real business. The DB license itself is a drop in the bucket compared to those service and maintenance fees. What Oracle fails to see is that as other DBMSs catch up to them in terms of redundancy (you are right about that, Oracle is good with even severe system crashes), reliability, and speed (and I believe that SQL Server is at that level, and MySQL is not far behind), there really is zero reason for anyone to choose Oracle, unless they already have it. SQL Server is cheaper and integrates very well into a Windows environment, MySQL is free and ridiculously easy and pleasant to manage. PostgreSQL is rock solid, and also easy to manage. Take your pick, but don't pick Oracle!

        I also forgot to mention that Oracle does less to support developers than anyone else. Their attitude is, "Have your DBA put everything into stored procedures, and let the developers just write interfaces to access those stored procedures." That may work in the highly structured, "death by process" world of Fortune 500 IT, but it does not fly for the rest of us. In my current position, the company I work for is hired because their people do not have time to be messing with this stuff. They know and request that we do NOT use stored procedures, because they want us handling this stuff, not their DBAs. So it is crucial that I get good support in my tools. Oracle is lousy at supporting developers.
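
        For context, the "write interfaces to the DBA's stored procedures" division of labor looks roughly like this from the application side. This is a JDBC sketch only, with a made-up procedure name and parameters, not anything from an actual Oracle schema:

        import java.math.BigDecimal;
        import java.sql.CallableStatement;
        import java.sql.Connection;
        import java.sql.Types;

        public class StoredProcClient {
            // Thin wrapper around a hypothetical GET_CUSTOMER_BALANCE(p_customer_id IN, p_balance OUT).
            static BigDecimal customerBalance(Connection conn, int customerId) throws Exception {
                try (CallableStatement cs = conn.prepareCall("{ call get_customer_balance(?, ?) }")) {
                    cs.setInt(1, customerId);                  // IN parameter
                    cs.registerOutParameter(2, Types.NUMERIC); // OUT parameter
                    cs.execute();
                    return cs.getBigDecimal(2);
                }
            }
        }

        All of the business logic lives in the procedure; the application code is reduced to plumbing, which is precisely the arrangement described above.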

        I also recall finding occasional, minor differences between the Windows and the UNIX versions of Oracle. I think these have mostly been resolved, but it was rather… interesting… to develop against a local development database, only to discover that the code stopped working when moved to the pre-production test environment. Very troubling indeed.

        BTW – I have seen UPSs fail too! Two weeks ago (I had replacement UPSs on backorder) I actually yanked the power cords for my server out of the UPS during a brownout, because the UPS was going insane due to a nearly dead battery, and it was less harmful to the equipment to be yanked completely than to keep getting enough juice to try to start, and then fail. Next week, in fact, I will be posting a short review of the GE GT series UPSs that I just got in, on this blog. As a sneak preview, I'll just say this: I love them.

        J.Ja

      • #3229560

        Oracle is the T.O. of Software

        by mark miller ·

        In reply to Oracle is the T.O. of Software

        Justin:

        Re: The DB license is a drop in the bucket compared to the services and maintenance fees

        I didn't keep track of that much. I happened to catch the price of the DB license though, and I was shocked: something like $20,000-$30,000. I can't remember exactly. And that was just for our development database. I can't imagine what our clients must have paid for a production setup.

        Re: Oracle’s attitude is just have your DBA put everything in stored procedures

        Well they would do that, wouldn’t they? 🙂 That’s how they tie you into their platform.

        I only got a hint of this while I was working on it. The project where I was attempting to use embedded PL/SQL was in this mode. We only did it this way because we thought it would be less laborious. Maybe it was in the end. I believe what my higher-ups did was use Oracle Designer to lay out the schema, and they added in things like constraint checks, plus they used its facilities for implementing business logic in PL/SQL. The constraint checks were implemented in triggers. Data got passed on to stored procedures which implemented the business logic. That’s the reason why I was having to use embedded PL/SQL. The particular client we implemented did not directly interface with Oracle, but rather with a middle-tier written in C. So that’s where the interface had to be implemented, and where I ran into my troubles. It was awkward doing embedded PL/SQL. Just doing plain ‘ole embedded SQL was more straightforward. I can’t remember exactly, but I probably had more control, too. If anything went wrong I could trap the error and handle it myself. I don’t remember what we did with PL/SQL errors. I think we decided to bite the bullet and go the trigger/stored procedure route, because if we hadn’t I would’ve had to have implemented the constraint checks myself in C code. And we estimated that was going to take too long. It was decided it was better to have Designer set that up in PL/SQL. Designer did have a facility for generating VC++ and VB code, but it could only be compiled for Windows, of course. I needed it on Unix. Nowadays it can probably generate Java code, no?

        Incidentally, I've talked to software consultants who, when I tell them the story about my old company (not naming names) and how they stored all their business logic in stored procedures, chuckle and ask me for a lead. They would say, "Hey, maybe I can help them out." They saw this done before, and in their experience it just led to a mess, one that they helped clients get out of. I've kept in touch with the system architect where I used to work over the years, and he hasn't complained about the setup. It seems to have worked out fine for them, but they decided to become an ASP fairly soon after I left. Since then they've had absolute control over their systems, rather than having to adapt to the customer's system requirements.

        With all that’s been said about SQL Server 2005, and how you can write .Net stored procedures now, people are tempted to think in the same direction: put the business logic in the stored procedures. From what I’ve heard from MVPs, this is NOT what .Net should be used for in the database. Rather, it should be used in place of extended stored procedures, those cases where it used to be necessary to write a DLL and link it into SQL Server to be run from the database. Microsoft’s preferred approach is for people to implement the business logic in web services, outside the database.

        Oh, one other incident that happened to me was I was testing out Oracle Lite, the first version they came out with. Lite was a database that would run on a client system, and was compatible with Oracle Server. It could do things like carry out replication, and check for consistency when the client and server databases were synched. It was kind of impressive, too. It had a background process that would monitor the database and what was being done to it. I tried out stuff like have a program that added a bunch of rows to a table and then cut the power. No data corruption. I hadn’t seen that in any other client database to date. Its main focus seemed to be on Java. It provided a JDBC interface to the client database, plus you could put Java stored procedures into it. Again, impressive for its time. It came with an embedded SQL pre-compiler that worked with VC++, and this is where I ran into trouble. It turned out the installation was missing a header file it needed. I couldn’t compile anything. It took some prodding of the situation to figure this out. Finally I called Oracle’s customer support. They were using Indian tech help even back then. I got this nice Indian woman on the other end. I described the problem to her. She said she would get back to me, and she did. I was impressed with that. She said that the right header file was at an FTP address, and gave it to me. I had a little trouble with it (she may have given me the wrong location), but she helped me through it. I finally got it, and that was it. She explained that the reason she couldn’t handle problems directly was that the embedded SQL pre-compiler was written by a third party, not Oracle. So she had to contact their team to get anything done. that was informative. I said goodbye and hung up. I tried compiling again, and this time I believe I found another problem. I can’t remember what it was. So I called back and got the same contact (they have you use an incident number). She helped me through that. I said goodbye and hung up. Finally I got everything working. I was a little surprised. This same Indian lady called me back a day later, and asked if the problem was resolved. I was busy so it kept slipping my mind. She called me back something like 3 times over the course of a week, asking if the problem was resolved. Finally I called her back and said yes it was. I really didn’t expect this level of service. It was nice. IMO it wouldn’t have even been necessary if they had made sure all the pieces were in the distribution before they sent it to me. I’m sure they charged the time to the service contract account we had with them. If we didn’t have that in place I guess I would’ve just thought Oracle Lite was a POS and we would’ve just tossed it. We didn’t use it while I was there, but I heard they used it later. Once I got it going I wanted to use it, because I thought it would make our programming lives easier. Before then we had been using a client API-based database, and it was difficult to do things with it like joins (we always implemented the joins in our code, which was a pain). With the embedded SQL compiler we could just put the join in a SQL query. That would’ve been nice, but oh well.

        In a way Oracle was ahead of their time with this. I don't think Microsoft had come out with MSDE yet, and they had nothing for Windows CE. Now they have SQL Server Express, which will work in a client environment, and SQL Server for Windows CE. When that first came out people thought it was so bizarre: "SQL Server on Windows CE??? How the heck does THAT work??" 🙂

      • #3229430

        Oracle is the T.O. of Software

        by mulkers ·

        In reply to Oracle is the T.O. of Software

        I have been using Oracle DB in Java applications and have never had the kind of problem you mentioned. The Oracle driver is a native JDBC driver; you just have to add ojdbc14.jar to the application classpath, and that's it. There is no Oracle client installation needed. I have never had to read the documentation for this driver. So when you mention that "every other database out there just needs to install a small ODBC driver (or JDBC driver in a convenient JAR file, for Java developers)" – well, I don't understand that, because this is exactly what I do with the Oracle JDBC driver and it works.
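
        For reference, a minimal sketch of the thin-driver connection described above, assuming ojdbc14.jar is on the classpath; the host, port, SID, and credentials are placeholders, not values from any real setup:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class ThinDriverDemo {
            public static void main(String[] args) throws Exception {
                // Registers Oracle's pure-Java "thin" driver; no Oracle client or tnsnames.ora needed.
                Class.forName("oracle.jdbc.driver.OracleDriver");
                try (Connection conn = DriverManager.getConnection(
                             "jdbc:oracle:thin:@dbhost:1521:ORCL", "scott", "tiger");
                     Statement stmt = conn.createStatement();
                     ResultSet rs = stmt.executeQuery("SELECT sysdate FROM dual")) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1));
                    }
                }
            }
        }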
        I am not an expert in .NET but I know there is a specific Oracle driver for .NET, check it out.
        Best regards.

        Robin Mulkers

      • #3229424

        Oracle is the T.O. of Software

        by sss ·

        In reply to Oracle is the T.O. of Software

        Oracle, once patched, is a stable DB and runs fine. I think its CBO optimizer is great. PL/SQL is an adequate language. RMAN is a good backup and clone utility. BUT most serious users need to buy 3rd-party products like Quest's TOAD, and then using Oracle becomes a dream, be you a programmer or a DBA. Oracle is making serious efforts to plug the need for 3rd parties with its own Java products, but they do not match the finish of TOAD. I think Oracle's commitment to Java is admirable but costing it dearly. The TCO of Oracle is high because you need TOAD, etc.

        SAP, on the other hand, is pragmatic and does excellent Windows and MS Office integration that is painless.

        One major grouse I have is Oracle trace. TKPROF does not see BIND variables. Metalink has some stuff, but not in a GUI. When I saw SAP's ST05 Oracle trace, it was an amazing interface to Oracle trace. It made me think that SAP knows Oracle better than Oracle! Without a good SQL trace GUI, Oracle is painful when debugging.

        Microsoft SQL Server's SQL trace is fabulous. Why can't Oracle look at SAP's ST05 and Microsoft SQL Server's SQL trace and do something that proves Oracle is not stuck in the Unix command line era or on creaking Java attempts?

        Oracle should buy Quest & subcontract SQL Tracing to SAP! 

      • #3229367

        Oracle is the T.O. of Software

        by justin james ·

        In reply to Oracle is the T.O. of Software

        Robin –

        I will say, it has been a number of years since I worked with JDBC & Oracle. That being said, it was a nightmare. It took well over 8 hours to get my development box up and running to the point where I could access Oracle, and the client was a major part of it. The Oracle tools for .Net completely and utterly do not work out of the box, and sometimes require hours of troubleshooting to correct. Furthermore, installing them hoses the existing Oracle client installation in my experience (like last week). So while it sounds like Java developers finally have it good, the rest of us still suffer. I may add, when I was doing Java development, the JDBC drivers had a critical flaw (I do not remember the details offhand) that Oracle had been aware of for over a year and had not corrected.

        To SSS –

        I agree, TOAD is good. Lousy interface, difficult to use, but it does have everything including the kitchen sink in it. It is a real shame that, as you mentioned, it is not free. Other DB vendors give you free management tools that, while not quite as comprehensive as TOAD (for some of them, at least; Microsoft's SQL Server management tools are just as inclusive from what I have seen), have the features that I need 90%+ of the time, while being free too. There is a reason why you do not see "TOAD for MySQL" or "TOAD for SQL Server": it would be hard to sell that product at the price that TOAD sells for.

        J.Ja

      • #3230010

        Oracle is the T.O. of Software

        by debuggist ·

        In reply to Oracle is the T.O. of Software

        I’ll never forget how, in the mid 90’s, I had to use a 3rd party ODBC driver (Visibroker) with Oracle, because it didn’t have the bug that Oracle’s driver had.

        I’ll never forget when I saw a non-production setup of Oracle Parallel Server go down (circa 1999). Both instances ran on UNIX.

        I’ll never forget an early version of Oracle Financials written in Java (again, circa 1999). This was an app that gave Java apps a bad name. It had the positions of the ‘OK’ and ‘Cancel’ buttons switched on the dialogs. It was slow and an example of poor usability design. Most of its problems were not due to Java.

        Then there’s Oracle Forms…

        Generally, Oracle’s DB is rock-solid, but any other software they make is just painful to use. People make fun of MS, but their software is better.

      • #3229998

        Oracle is the T.O. of Software

        by madestroitsolutions ·

        In reply to Oracle is the T.O. of Software

        Totally agree with you. In my last job I had to maintain an application that used Oracle as the backend, and I cannot find words to describe my frustration with this product, especially their tools. They were so cumbersome and riddled with bugs that I ended up making my own clone of MS SQL Query Analyzer to be able to debug stuff. Documentation I won't even get into >: (

      • #3229972

        Oracle is the T.O. of Software

        by justin james ·

        In reply to Oracle is the T.O. of Software

        “Generally, Oracle’s DB is rock-solid, but any other software they make is just painful to use.”

        Doug, that sums it up perfectly. Even within the DB itself, there are non-performance-related issues, like the useless error messages it kicks out. The copy of Oracle installed on my development PC was slowly filling my drive with core dumps until I turned that behavior off. If I had not noticed that disk space was disappearing at a rate of 1 GB per week on a machine where practically nothing gets stored locally, I never would have noticed it. I can imagine the same thing going undetected forever on a server. Oracle was not nice enough to dump anything to the event log either.

        J.Ja

      • #3201908

        Oracle is the T.O. of Software

        by bscalzo ·

        In reply to Oracle is the T.O. of Software

        I see it mentioned that there is no Toad for other database platforms because no one would pay the fees. Well, there are in fact Toads for SQL Server, DB2, and MySQL. For MySQL, it's 100% free for the full version. The other platforms (like Oracle) offer both a freeware and a full-blown version. The freeware versions for the other platforms simply require you to redownload them every 60 days, but they obviously lack some of the features worth paying for. The other platforms' Toads are priced based upon market conditions versus the value of increased developer/DBA productivity. So Toad for SQL Server is cheaper than the Oracle and DB2 versions – and again, the MySQL version is 100% free.

        But what often really amazes me is the reference to price. Let's say Toad on whatever platform only makes your database work 20% more productive. Assuming that you make the average $80K per year, then Toad is saving your company $20K per year versus its meager $897 price. It's a real bargain in those terms. But people just like to complain about software prices like it's their money being spent to buy it (and not their company's money). Even private consultants are on record defending Toad's price, saying that those who only see the cost should raise their hourly rate to compensate for the increased productivity.

        And I too love freeware and open source software. But those of us in the software industry also have house payments, car payments, and kids to put through college. If Toad were free, then how would Quest afford to pay its developers? I just wish people would be fair on this topic. Let's say you work for an insurance company – why can't insurance be free too? My point is that people love to pay for everything but software, and they berate those of us in the industry like we're evil if we charge. But they never look in the mirror and ask the same of their own industry. It just seems very unfair and unreasonable.

      • #3201902

        Oracle is the T.O. of Software

        by justin james ·

        In reply to Oracle is the T.O. of Software

        Bert –

        I had no idea that there was a TOAD for other DBs, I have never seen it advertised. Thanks for letting me know though, I will check that out.

        "But what often really amazes me is the reference to price. Let's say Toad on whatever platform only makes your database work 20% more productive. Assuming that you make the average $80K per year, then Toad is saving your company $20K per year versus its meager $897 price. It's a real bargain in those terms. But people just like to complain about software prices like it's their money being spent to buy it (and not their company's money). Even private consultants are on record defending Toad's price, saying that those who only see the cost should raise their hourly rate to compensate for the increased productivity."

        I know exactly where you are coming from, and unfortunately, that is not entirely how things are looked at by the decision makers. In the mind of many decision makers, any capital expense is going to end up as a line item somewhere that may need to be justified to a boss, a stock holder, an internal auditor, or whoever. "Cost cutting" is the word of the day. Wasted time and productivity never shows up on a spreadsheet anywhere, so it is "free" as far as the company is concerned.

        In 2005, I left a company where people in my building had Pentium 1 systems running Windows 95 to perform heavy-duty work (2 - 3 JVMs open at once, Outlook, Excel, Internet Explorer, and a few other apps, all while groaning under the weight of McAfee A/V). I had to come in a full 15 minutes before my shift started to ensure that my computer was up and running and that I would be ready to work at the shift change, and even then the computer would sometimes take 20 - 30 minutes to be up and running. When you figure 15 minutes of overtime, times 5 days a week, it would take only a few months to pay for a replacement. But a replacement would be a line item expense, and my wasted time (not to mention needing at least one more person per shift, because the equipment was so slow) never looked like an expense, because the systems had always been that slow! Anyone could make a spreadsheet showing a 1+ person headcount reduction plus an ROI of well under 6 months just to replace those PCs, and actually achieve that ROI, yet the company was so focused on cost containment to please the stockholders that it simply would not happen.

        I see this kind of thing all of the time. It stinks. This is why paying for software, as opposed to using whatever free software came with the product, happens so infrequently. It was not until recently that I worked for any company that ever bought equipment or software for employees to use. I have worked for places where if it was not free, it did not get used. It stinks, but that is how it is.

        J.Ja

      • #3201889

        Oracle is the T.O. of Software

        by bscalzo ·

        In reply to Oracle is the T.O. of Software

        Fair enough – it is what it is, then. But working in the software industry, it just hurts when people say we're too expensive. I made more money working as a DBA than as a software guy. So it just feels like they're saying we're overpaid – which we ain't.

        Please do check out the other Toad versions. But do note that they are very early in their life cycle – meaning they're eight years behind Toad for Oracle in terms of features. But they're pretty darn good right out of the gate – and getting better every day now 🙂

      • #3230798

        Oracle is the T.O. of Software

        by debuggist ·

        In reply to Oracle is the T.O. of Software

        I’m trying TOAD for MySQL now. I just downloaded it a few days ago. It’s a little different than SQLyog, but I always loved TOAD for Oracle, because it was so much better than Oracle’s tools.

      • #3199880

        Oracle is the T.O. of Software

        by mark miller ·

        In reply to Oracle is the T.O. of Software

        In the mind of many decision makers, any capital expense is going to end up as a line item somewhere that may need to be justified to a boss, a stock holder, an internal auditor, or whoever. “Cost cutting” is the word of the day. Wasted time and productivity never shows up on a spreadsheet anywhere, so it is “free” as far as the company is concerned.

        Hi Justin. Wow. This explains a lot! I’ve wondered about this over the years. Many years ago I worked at a software solutions consulting company, and while they weren’t totally allergic to buying software, we wrote a lot of software in-house that could have been bought and customized. In my own estimation, in several cases it would’ve been cheaper to buy, considering that they were paying us a few thousand dollars a month to do our work.

        I started working there before Windows 95 came out. My machine was designed for DOS. It was upgraded to Windows 95 when it came out, and one thing I noticed was that anytime a window would need scrolling, the window update was SLOW. The video card was old so it couldn’t implement video acceleration. This got painful when I got switched over to Unix development. I would telnet in using some terminal software. Many times the terminal window would scroll when I displayed a list, like a directory listing, or ran some utility that output a large list of data. The operation would be done, like a minute ago, but I couldn’t see the end result, because my window was still catching up to the output. Aaagh! I pleaded with my boss on several occasions to purchase a newer video card for my machine, but he resisted. I told him it was slowing me down, affecting my productivity. The problem was the motherboard was old too. So no video card worked with its slots. It would have to be upgraded as well. Finally, after several months, I got my upgrade. I was SO happy! The screen was quick and responsive. But I wondered why it took so long. I think what you said explains it. That, and why we kept “reinventing the wheel” when we could’ve bought the solution.

      • #3199703

        Oracle is the T.O. of Software

        by justin james ·

        In reply to Oracle is the T.O. of Software

        Mark –

        Happy that I could help you with that! I wrote a lot more about the subject in April. This particular issue is not IT specific, it is MBA specific. MBAs, particularly those who are heavily compensated based upon certain performance metrics (revenue, growth, etc.), are the root cause of a lot of these frustrations. Objective metrics drive bonuses, not subjective opinions (such as “the long-term sustainability of this company”), sadly.

        J.Ja

      • #3199911

        Oracle is the T.O. of Software

        by fmteter ·

        In reply to Oracle is the T.O. of Software

        Loved the comment about Oracle consultants billing at divorce lawyer rates.  I was a divorce lawyer several years ago, but switched into Oracle consulting because the money was better…

        –FMT–

      • #3200760

        Oracle is the T.O. of Software

        by vmirchan ·

        In reply to Oracle is the T.O. of Software

        Would you rather deal with Oracle apps? See:

        This is not Six Sigma

    • #3229788

      Review of the GE GT 1.5 kVa UPS

      by justin james ·

      In reply to Critical Thinking

      The company that I work for had a pressing need for some battery backup. We were adding new servers with more power draw, and our existing UPS just was not cutting it. One of the biggest challenges that we faced was a lack of 20 amp (or bigger) outlets. Our office is only equipped with 15 amp outlets, no one ever expected our needs to grow as quickly as they have been. As a result, we were limited to about 1.5 kVa worth of power.

      Originally, due to cost considerations, we were going to stick with some low end, line interactive units (the Powerware 5120, I believe). One of the reasons we suddenly needed so much power, though, was our new server, a dual Xeon 3 GHz system with 4 SATA II drives in it, rated at 700 watts. Although it was fairly obvious that the unit would not be drawing anywhere near 700 watts, 300 to 350 watts seemed to be a reasonable estimate of the power draw of that server. Add to that the load of three other servers, and a single 1.5 kVa system would be quite near its maximum capacity, and well over the recommended capacity.

      We decided to not go with the Powerware unit, and instead opt for a pair of GE GT rackmount units, each with an external battery pack for additional runtime. Here is why we opted for this particular unit:
      * Its maximum load is 900 watts, while many competing units were rated at 700 watts
      * 2U rackmount form factor for each unit, as well as each battery pack
      * Dual conversion, as opposed to line interactive
      * Able to daisy chain up to three external battery packs per UPS unit
      * Plenty of outlets on the unit
      * Outstanding price point, especially in comparison to other 1.5 kVa dual conversion units
      The last point was the most important. Although the cost was significantly higher than any line interactive unit, it was less expensive (by at least 20%) than any competing product. For a number of reasons, I will not name the vendor or the price paid for the units.

      The units arrived via freight. Initially, I was rather scared by them; the freight company listed four boxes at a total of 290 lbs. The prospect of manhandling 75 lb. units did not thrill me. When the boxes arrived, I was delighted to discover that they were not nearly as heavy as I had originally expected. Initial inspection of the units showed them to be extremely well constructed. The power cords were, by far, the most heavy duty 15 amp power cords that I have ever seen. Indeed, they were of a higher, heavier duty quality than on the APC SmartUPS 2200 that I have at home. Likewise, the connectors for the external battery packs were extremely heavy duty.

      Installation and set up of the units was a breeze. The only sticking point was the cords for the external battery packs. They were pre-bent into a curve, and extremely resistant to bending. If I wanted to swap the position of the UPS and the battery (putting the battery above the unit, as opposed to under it), I would have had an extremely difficult time. I also had some difficulty inserting one end of the battery’s power cord into one plug. A little bit of wiggling cured that, however. The plugs were secured with bolts, but the holes in the power cord’s “ears” were slightly small. I was very happy to discover that both the UPSes and the batteries were not nearly as heavy as I expected. The handles on the front of the units made it easy to move them, and picking them up without a helper was not a problem.

      After installation, the units continued to delight. They are whisper quiet, and generate extremely little heat. While this is not a prime concern in a server rack buried in a server room, I believe that a tower version of this unit would be extremely pleasant to have next to a desk. Indeed, the units are quieter and generate less heat than my desktop PC. It was nice to deal with rackmount equipment that did not forgo the creature comforts. Visually, the units are quite attractive. Although the average systems administrator probably could not care less, some people do like a good looking unit. The units are a pleasant silver color, with a sporty red stripe across the front. The control panel is extremely bright and easy to read. The LEDs are spaced far apart, so light from one LED does not appear to light the other LEDs. The control panel has a general “step meter” that displays load while receiving external power, and shows battery charge while on battery. I would have preferred to see dual meters. In my experience, getting an idea of how much load can be reduced by shutting down non-critical systems first to extend run time for more important systems can be important.

      I must admit though to being a bit disappointed in GE’s “JUMPStart” software for the units. [Correction 8/23/2006: The software is named “JUMP Manager”, not “JUMPStart”; I got the name confused with a Sun software package I was recently dealing with.] My first impression of it was less than favorable: it requested an installation path without spaces. It is a Java application that seems to run on both Windows and *Nix. I tested it on a Windows 2003 Enterprise Edition server. After installation, it was not courteous enough to make a group for itself in the Programs menu. It just dumped itself into the root of the menu. It also did not add itself to “Add/Remove Programs.” It also did not add itself as a Windows service. When I did run it manually, it left a nasty Java console window open. The Java console was spitting out errors, but the software itself seemed to be running fine. This software, as far as I am concerned, is not ready for primetime for general UPS monitoring purposes. I would definitely recommend purchasing the add-on SNMP card, and monitoring the unit that way. I will be looking into that myself, and add it to my MOM configuration, along with scripts to shut down servers in the case of a low battery.

      Overall, I think the GE GT unit is a fantastic item. It is available in 1 kVa to 3 kVa capacities. The only problem that went beyond “one time nuisance” was the lack of good software for the serial connection, pretty much mandating the purchase of the add-on SNMP card. You should factor this into your purchase price if you decide to buy this unit. The light weight of the units and the battery packs lets me feel more comfortable that I will not develop a hernia when it comes time to replace the batteries.

      J.Ja

      • #3282491

        Review of the GE GT 1.5 kVa UPS

        by maximhc ·

        In reply to Review of the GE GT 1.5 kVa UPS

        I sure hope this is not a true server room, computer room or data center. Placing a UPS in a rack with batteries, without a means to remotely disconnect them, is a code violation. If a fire fighter goes in there with a water hose and those batteries are live, he or she is done for, and most likely the company too.

         

        Check with your AHJ to ensure you are in the correct environment and not putting lives at stake here.

         

        GHH.

         

      • #3282470

        Review of the GE GT 1.5 kVa UPS

        by dons ·

        In reply to Review of the GE GT 1.5 kVa UPS


        It’s a shame that GE’s service group has imploded, and the lack of support may be a future issue for you.  In the world of UPSes, pound for pound, you’ll get exactly what you pay for, in service and capability.  You mentioned that the Powerware unit did not have the power to support your load, yet it took two (2) GE units to handle it.  I’m not a proponent of Powerware, just proper use of the right UPS technology in the application it was intended to support.  Powerware has different technologies within their stable of equipment.  There are many new UPS brands on the market with much better specs and support than either of these.  They’re just not as well known or marketed as well as the popular brands.  Long term cost of ownership is very important, especially with a new UPS (with an uncertain future).  There’s more to know about the workings of the GE product than meets the eye.  Battery maintenance is the single biggest factor you’ll deal with over the life of any UPS.  My suggestion is to consider a UPS purchase with the same amount of research as you would your server.  It’s just as important.

        D.S.

         

      • #3282453

        Review of the GE GT 1.5 kVa UPS

        by justin james ·

        In reply to Review of the GE GT 1.5 kVa UPS

        D.S. & GHH –

        Thanks for the feedback.

        “Placing a UPS in a rack with batteries without a mean to remotely disconnect them is a code violation.”

        This is something that I was NOT aware of, and thank you for bringing it to my attention. I am assuming that you are referring to a master kill switch or some other method of turning them off without getting close to them. The units are single phase, not three phase, so they are no more dangerous than a standard wall outlet. However, I do see your point: quickly filling a room with water while those units are still live, even after rescue personnel have cut the power from the outside, could be quite dangerous. I will check into that immediately, and thank you for bringing it up.

        “You mentioned that the Powerware unit did not have the power to support your load yet it took two (2) GE units to handle it.”

        The Powerware units we were comparing it to had a 700 watt maximum power rating, and these units had a 900 watt maximum power rating. Unfortunately, thanks to the current environment, we are not able to simply get one large unit; we only have 15 amp outlets and 1.5 kVa units are the maximum size we can use. Sadly, the office I currently work in was never designed for anyone doing more than having maybe 1 dinky econo-server on a “footstool” UPS. While we are working on moving our operations to a more “computer friendly environment,” we need to make do with what we have today, and power protection simply could not wait.

        The office selection was made a few weeks before I began working for the company (I would have rejected it out of hand, as it is not suitable for our needs), but we are currently working on moving to a more realistic situation.

        J.Ja

      • #3282402

        Review of the GE GT 1.5 kVa UPS

        by pixitha ·

        In reply to Review of the GE GT 1.5 kVa UPS

        I’m curious which retailer you went with when purchasing those 2 UPSes?

        Thanks,
        Kyle

      • #3282383

        Review of the GE GT 1.5 kVa UPS

        by justin james ·

        In reply to Review of the GE GT 1.5 kVa UPS

        Kyle –

        There are a number of ties between myself and that particular vendor, so I do not believe that it would be appropriate for me to name them. However, I can tell you that the vendor shows up very high in a Google search for this particular product. It was not a “big name” vendor like CDW, Ingram Micro, etc.

        J.Ja

      • #3284831

        Review of the GE GT 1.5 kVa UPS

        by granville ·

        In reply to Review of the GE GT 1.5 kVa UPS

        To J Ja:  Reading this makes me grateful that Australia (Oz) has 240 V current delivered by our mains.  I have a 3 kVa UPS running on a 10 amp circuit, installed by an electrician and approved by a building engineer.

        GHH:  You make a very interesting point thank you.  I must check the safety requirements for my own site.

        GAHW.

    • #3205771

      Multithreading Tutorial – Part I: Introduction

      by justin james ·

      In reply to Critical Thinking

      This is the first of a multi-part series demonstrating multithreading techniques and performance characteristics in VB.Net.

      Introduction

      Multithreading is treated as a black art by many programmers, and for good reason. Even for an experienced programmer, knowing when to multithread an application, where to multithread it, and how to multithread it can be difficult to determine. Multithreading is definitely not a science; there are no hard and fast rules whatsoever to guide the programmer. In preparation for writing this series, I contacted Rick Brewster at Microsoft (who worked on the excellent Paint.net software) for advice. He quoted Rico Mariani at Microsoft: “measure, measure, measure.” This is sound advice. This series of articles is not going to provide any pat advice, but it will offer some suggestions, sample code, and demonstrations of various multithreading techniques, as well as show some performance characteristics, so that you can get an idea of your multithreading options.
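
      In that spirit of “measure, measure, measure,” here is a minimal, throwaway sketch of one way to time a block of code in VB.Net. It is not part of this series’ project; System.Diagnostics.Stopwatch does the timing, and DoSomeWork() is simply a hypothetical stand-in for whatever you want to measure.

        Imports System.Diagnostics
        Imports System.Threading

        Module TimingSketch
          Sub Main()
            ' Stopwatch gives higher resolution timing than subtracting DateTime values.
            Dim swTimer As New Stopwatch

            swTimer.Start()
            DoSomeWork()  ' hypothetical routine being measured
            swTimer.Stop()

            Console.WriteLine("Elapsed: " & swTimer.Elapsed.TotalMilliseconds & " ms")
          End Sub

          Sub DoSomeWork()
            ' Placeholder for the code under test.
            Thread.Sleep(100)
          End Sub
        End Module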

      Where We Were, Where We Are

      The idea behind multithreading is not new; it goes back decades. However, not until very recently, with the advent of HyperThreaded processors and then multi-core processors, did multithreading become as critical to understand when writing applications as it suddenly is. Before these advances in hardware, unless you were writing applications for “big iron” or for expensive SMP or parallel processing servers, chances were that the system you were coding for would not have more than one physical or logical CPU. The Windows world has been almost entirely single processor until recently, while only the high end Macintosh machines had multiple processors. The game has completely changed, particularly with Intel introducing the Core 2 CPUs (codenamed “Conroe”). With this line of CPUs and the guaranteed-to-follow AMD counterpunch, along with the previous generations of multi-core CPUs, even low end desktop PCs are coming out with multiple logical CPUs.

      On a single core, single CPU computer, all multithreading is performed by the operating system with a trick called “time slicing.” As a basic idea, time slicing essentially multiplexes requests to the CPU, interleaving them and giving a bigger portion of the pipeline to higher priority processes. While the system is effective and works, it is quite inefficient. The operating system needs to understand which threads go with which requests, and there is quite a bit of overhead associated with the technique. With the new systems entering (and already on) the market, the multithreading game has changed completely, allowing the operating system to dynamically allocate different threads to different logical CPUs based upon the threads’ needs. For example, one long running thread can devour all of one core’s resources, while the other core handles short lived, simple to process threads such as user input and mouse clicks. As a result, applications that make maximum use of the new architecture will be more responsive, run more smoothly, and provide a better experience to the user.

      As an interesting sideline to the new architectural changes, while the number of cores per physical CPU is increasing (Intel and AMD are currently producing dual core systems, and quad core systems are on their way!), the clock speed of the individual cores has remained stagnant and has even gone down a bit in some cases. In other words, while the new CPUs can juggle multiple tasks better than before, single tasks pay a price. This makes it more crucial than ever before that applications be written to use multiple threads in the right places in the right way. Most desktop applications tend not to use multithreading, or if they do, the main processing occurs on one thread, leaving the main application’s thread free to provide status updates and process user input such as handling a “Cancel” button. We have all seen what a single threaded application looks like: the window goes grey, Task Manager reports that the process is not responding until it suddenly finishes and updates the screen, we see the spinning hourglass, and we have no way of stopping it from cranking away without killing the whole process. Even worse, these applications frequently dominate the CPU, causing the entire system to run slowly. A properly multithreaded application, particularly one on a multi-core or multi-CPU system, does not inflict itself on the user like this.

      To sum it all up, an application that is properly multithreaded, all else being equal, will be preferred by users over one that runs in a single thread. And programmers who understand multithreading and have demonstrable experience with multithreading techniques will be in increasingly hot demand as the desktop market shifts to the new CPUs.

      A Quick Primer to Multithreading Concepts

      Multithreading seems to have a language all to itself, and it does! Many of the concepts involved in multithreading simply do not get discussed in traditional procedural or object oriented code, because the problems do not exist there. If they are encountered at all, it is usually when working with a database, in terms of maintaining data integrity. Here is a brief primer to the ideas in the world of multithreading.

      Physical CPU: An actual CPU or processor that fits into a socket on the motherboard. A computer with only one core per CPU will have the same number of physical and logical CPUs.

      Logical CPU: A separate CPU pipeline. A HyperThreaded processor will appear to have two logical CPUs per core, and a multi-core processor will have a logical CPU for each core per processor. For example, a computer with two dual-core, HyperThreaded CPUs will appear to have eight logical CPUs (2 physical CPUs x 2 cores per CPU x 2 pipelines per core).

      Atomic Operation: An operation in code which must only be executed by one thread at a time, typically to maintain data integrity. For example, the following code block will not work correctly if other threads try to update the variable intSharedVariable, necessitating making this block of code an atomic operation:

      intThreadIterations = 0
      Do While intSharedVariable < intMaximumIterations AndAlso intThreadIterations <= 5
        ' Do some work
        intSharedVariable += 1
        intThreadIterations += 1
      Loop
      

      Block: A thread (or process) is “blocking” when it is in such a state that all other threads must wait until it is finished to continue their work. Using the example for “Atomic Operation,” any thread trying to access intSharedVariable will block until the atomic operation is completed.

      Lock: A system for restricting access to a particular variable or object from other threads. The other threads will block until the lock is released. Some locks pertain to both reading and writing (any thread accessing the object will block, no matter what), and others are write locks, which allow other threads to read the data but block them when they try to change its value. It is extremely rare to find a read lock, which would allow data to be written but block read operations.

      Mutex: A locking primitive used to ensure that only one thread at a time has access to a resource.

      Semaphore: A counter that counts how many resources are being used at any given time. When the maximum is reached, calls to decrement the semaphore block until another thread releases the semaphore. Semaphores are used to limit the number of threads performing work or accessing a resource, such as in a thread pool.

      Monitor: A method of controlling and synchronizing access to objects.

      Thread Pool: The .Net thread pool dynamically manages the number of threads and their priority for processes that use it. Any number of threads may be requested to run from the thread pool, but the .Net Framework controls how many are active at any given moment.
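
      To make a few of these terms concrete, here is a minimal VB.Net sketch (not part of the tutorial’s project; all names are illustrative) showing a shared counter protected by a lock via SyncLock, with the work queued onto the .Net thread pool:

        Imports System.Threading

        Module PrimerSketch
          Private objCounterLock As New Object
          Private IntegerSharedCounter As Integer = 0

          Sub Main()
            ' Queue ten small work items onto the .Net thread pool.
            For IntegerItem As Integer = 1 To 10
              ThreadPool.QueueUserWorkItem(AddressOf DoWork, IntegerItem)
            Next

            ' Crude wait, purely for illustration.
            Thread.Sleep(1000)
            Console.WriteLine("Counter = " & IntegerSharedCounter)
          End Sub

          Sub DoWork(ByVal StateObject As Object)
            ' The increment is the atomic operation; SyncLock provides the lock.
            SyncLock objCounterLock
              IntegerSharedCounter += 1
            End SyncLock
          End Sub
        End Module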

      A Brief Aside Regarding Virtual Machines

      Because of the increasing prevalence of virtual machines (both the host/client type and the hypervisor variety), it is unwise to rely upon a count of the number of physical processors in a machine. Any low level functions are unreliable at best. Instead, you should count on the OS and/or the .Net Framework to determine the number of processors that are apparent to the OS. The best way to do this within .Net is a call to System.Environment.ProcessorCount, which returns the number of logical processors visible to the operating system. That is an important distinction, as a hypervisor or virtual machine system may make only one logical or physical processor accessible to the operating system running your application, while more logical or physical CPUs actually exist. As an example, if you have two dual core CPUs installed and the hypervisor only makes one core available to the OS running your application, but you base your thread usage on the number of logical cores at a low level (which would be four, not the one usable by your application and OS), you will be using four times as many threads as you meant to be using.
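
      As a brief sketch of how that call might be used (the one-thread-per-logical-CPU policy shown here is only a common starting point, not a rule from this series):

        ' Number of logical CPUs that the OS (or hypervisor) exposes to this process.
        Dim IntegerLogicalCPUs As Integer = System.Environment.ProcessorCount

        ' A common default: one worker thread per visible logical CPU.
        Dim IntegerWorkerThreads As Integer = IntegerLogicalCPUs

        Console.WriteLine("Logical CPUs visible to the OS: " & IntegerLogicalCPUs)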

      The Challenges

      Writing applications that use multithreading is not very difficult, particularly if it is the type where a single processing thread runs separately from the main thread, to allow for progress bars and cancel buttons. In that case, the programmer has not strayed very far from having a single threaded application running on a multithreaded operating system. Other systems process data in an N-dimensional format that does not require any atomic operations, so dividing the workload by logical CPU and processing the pieces simultaneously is not too difficult; many multimedia applications are written in this manner. For example, a graphics program might divide an image into quarters on a system with four logical CPUs, and use four threads to simultaneously apply a filter to each quarter. Other applications, like database or application servers, will create and queue a thread for each operation, and perform them as resources allow, in the order they were entered, carefully locking key data. Yet other applications will perform as much processing as possible while network or file I/O runs on a separate thread, splitting the two processes as early as possible and joining them as late as possible to give the I/O the most possible time to finish on its own. And yet other applications need to perform a good number of atomic operations, and must weigh the cost of each context switch and each block against the benefit of each additional thread.
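
      As an illustration of that divide-the-work-by-CPU pattern, here is a minimal sketch (all names are invented for the example) that splits an array into one slice per logical CPU, processes each slice on its own thread, and waits for every thread with Join():

        Imports System.Threading
        Imports System.Collections.Generic

        Public Class SliceWorker
          Public Data() As Double
          Public StartIndex As Integer
          Public FinishIndex As Integer

          Public Sub ProcessSlice()
            ' Each worker touches only its own slice, so no locking is needed.
            For IntegerIndex As Integer = StartIndex To FinishIndex
              Data(IntegerIndex) = System.Math.Sqrt(IntegerIndex)
            Next
          End Sub
        End Class

        Module PartitionSketch
          Sub Main()
            Dim Data(999999) As Double
            Dim IntegerSlices As Integer = System.Environment.ProcessorCount
            Dim IntegerSliceSize As Integer = Data.Length \ IntegerSlices
            Dim Workers As New List(Of Thread)

            For IntegerSlice As Integer = 0 To IntegerSlices - 1
              Dim Worker As New SliceWorker
              Worker.Data = Data
              Worker.StartIndex = IntegerSlice * IntegerSliceSize
              If IntegerSlice = IntegerSlices - 1 Then
                Worker.FinishIndex = Data.Length - 1
              Else
                Worker.FinishIndex = Worker.StartIndex + IntegerSliceSize - 1
              End If

              Dim ThreadWorker As New Thread(AddressOf Worker.ProcessSlice)
              Workers.Add(ThreadWorker)
              ThreadWorker.Start()
            Next

            ' Wait for every slice to finish before using the results.
            For Each ThreadWorker As Thread In Workers
              ThreadWorker.Join()
            Next
          End Sub
        End Module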

      By far, the biggest difficulty with writing multithreaded applications is maintaining data integrity. It is tempting to lock as much code as possible within atomic operations, or to tie objects up with mutexes and just hope for the best. While that approach may preserve data integrity, the cost of the atomic operations and locking is so high that your application will probably run significantly slower than a single threaded application!

      After the data integrity issue has been solved, performance is the next concern. A profiler, such as the one included with Visual Studio 2005, is invaluable for this. By using the profiler on a single threaded piece of code, you can see where the bottlenecks are, and judge whether or not they are big enough to justify the cost of multithreading them. Once the application is multithreaded, the profiler helps you measure what number of threads is best for your application.

      Testing is another challenge. Due to the nature of a multithreaded application, stepping through the application is frequently unrealistic. The best way to test these types of applications is to prepare a battery of input/expected output scenarios and test against those. Code review, particularly by peers, can pick up critical details that paper planning and even testing might not catch. Repeated tests are quite important, because depending on your threading technique, the threads may not always be processed in the same order, producing different results if there is a problem. Finally, having a wide range of CPU architectures to test on is extremely important if you are relying upon the number of processors to manage a thread pool or the number of concurrent threads.
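
      As a rough illustration of the repeated-testing point, a throwaway harness along these lines (the routine under test, RunMultithreadedSum(), is purely hypothetical) can run the same multithreaded code many times and flag any run whose result differs from the expected value:

        Module RepeatTestHarness
          Sub Main()
            Dim DoubleExpected As Double = 499999500000.0  ' expected sum of 0 through 999,999
            Dim IntegerFailures As Integer = 0

            ' Races often only show up intermittently, so run the test many times.
            For IntegerRun As Integer = 1 To 100
              Dim DoubleActual As Double = RunMultithreadedSum()
              If DoubleActual <> DoubleExpected Then
                IntegerFailures += 1
                Console.WriteLine("Run " & IntegerRun & " failed with " & DoubleActual)
              End If
            Next

            Console.WriteLine(IntegerFailures & " failed runs out of 100")
          End Sub

          Function RunMultithreadedSum() As Double
            ' Placeholder for the multithreaded routine being tested.
            Return 499999500000.0
          End Function
        End Module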

      Building a Multithreaded Application

      The next blog post in this series will work us through building the outline of an application designed to demonstrate various multithreading techniques and their performance characteristics. Stay tuned!

      J.Ja

      • #3283496

        Multithreading Tutorial – Part I: Introduction

        by bork blatt ·

        In reply to Multithreading Tutorial – Part I: Introduction

        * One * hard and fast rule…

        If you’re writing a Windows Service application, you definitely want to separate your processing from the thread that deals with service requests, or you may find that, when the service is chewing a lot of cycles, you can’t stop it.

      • #3284263

        Multithreading Tutorial – Part I: Introduction

        by stew2 ·

        In reply to Multithreading Tutorial – Part I: Introduction

        I have to quibble with some of your definitions. You should find these to be better:

        Physical CPU: A hardware device containing one or more processor cores.

        Processor Core: Hardware in a Physical CPU for processing program instructions.

        Logical CPU: A logical device for independently processing instructions. A processor core may have only one logical CPU, but an Intel HyperThreaded processor will contain two logical CPUs per processor core.

        Process: The highest level of separate execution managed by an operating system. Each independent instance of an application is a separate process.

        Thread: An operating system controlled independent sequence of execution within a process. Each process has at least one thread: the main or primary thread. Additional threads are created using facilities provided by the operating system. Each thread is given time to execute by being assigned to a logical CPU or by dividing the attention of a CPU among multiple threads.

        Fiber: An application controlled independent sequence of execution within a process. Fibers are cooperatively scheduled, which means a fiber must yield control so another can be given processing time.

        Atomic Operation: An operation that is guaranteed to complete before another thread or processor can interfere. What is classified as an atomic operation varies among architectures and operating systems. What is atomic on one platform may not be so on another.

        Critical Section: A section of code that is treated logically as an atomic operation by using appropriate locking mechanisms. The locks keep other threads and processors from interfering with the state manipulated by the critical section.

        Lock: A mechanism which can be used to ensure that certain sections of code are not executed, or a shared resource is not accessed, concurrently by independent threads, in order to ensure data integrity. A lock can be owned (acquired) by a specified number of threads (usually one) at any one time, and it prevents additional threads from owning it until an existing owner relinquishes its ownership.

        Blocking: The state of a thread which prevents it from continuing until some lock is released or action is completed. One thread can block another by acquiring a lock the other thread requires to continue. The other thread blocks trying to acquire the lock until the first thread releases the lock.

        Mutex: A Mutual Exclusion lock; only one thread may own a mutex at a time.

        Semaphore: A lock permitting a configurable maximum number of simultaneous owners.

        Thread Pool: A usually finite set of threads doled out on demand to requesters. Once all threads have been doled out, additional requests block until a thread is returned to the pool. This avoids managing too many threads which can degrade performance.

        Three of the principal difficulties of writing MT apps you failed to mention were deadlocks, races, and priority inversion. The former, you may know, occurs when two threads must acquire two or more locks, but each acquires the locks in a different order. This can allow each thread to acquire less than all of the locks it requires while blocking on the other thread for the remainder. Neither thread can continue, so they are said to deadlock. Race conditions, or races, occur when the result of code is dependent upon which of two threads executes first, or upon when one thread is interrupted relative to another. Priority inversion occurs when a low priority thread holds a lock, forcing a high priority thread to block awaiting that lock.
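
        To illustrate the deadlock described above, here is a minimal VB.Net sketch (module and lock names are illustrative only) in which two threads acquire the same two locks in opposite order; run it enough times and it will eventually hang:

          Imports System.Threading

          Module DeadlockSketch
            Private objLockA As New Object
            Private objLockB As New Object

            Sub Main()
              Dim Thread1 As New Thread(AddressOf TakeAThenB)
              Dim Thread2 As New Thread(AddressOf TakeBThenA)
              Thread1.Start()
              Thread2.Start()
              Thread1.Join()
              Thread2.Join()
              Console.WriteLine("Finished without deadlocking (this time).")
            End Sub

            Sub TakeAThenB()
              SyncLock objLockA
                Thread.Sleep(50)  ' widen the window for the deadlock
                SyncLock objLockB
                  ' work that needs both locks
                End SyncLock
              End SyncLock
            End Sub

            Sub TakeBThenA()
              SyncLock objLockB
                Thread.Sleep(50)
                SyncLock objLockA
                  ' work that needs both locks
                End SyncLock
              End SyncLock
            End Sub
          End Module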

        Another area you missed is lock-free programming. While it is particularly difficult to do right, lock-free programming can be a real boon to time critical processing. Using nothing more than a few lock-free, threadsafe data structures can be enough to make a high performance MT app.

      • #3284243

        Multithreading Tutorial – Part I: Introduction

        by justin james ·

        In reply to Multithreading Tutorial – Part I: Introduction

        Stew –

        Your definitions are indeed much more precise; I was trying to be brief and a bit “dumbed down,” but yours are excellent. You are also correct that I left out many of the challenges, and completely ignored lock-free programming. This particular post (and probably not as well explained as it could have been) is more of a lead-in to a series of posts about writing a basic multithreaded application than an all-inclusive introduction to the topic as a whole.

        J.Ja

    • #3283383

      Multithreading Tutorial – Part II: The Application Skeleton

      by justin james ·

      In reply to Critical Thinking

      This is the second of a multi-part series demonstrating multithreading techniques and performance characteristics in VB.Net.

      In the last post, I outlined some of the basics of multithreading. In today’s installment, we will be constructing the outline of an application that will show off some different multithreading tricks and techniques, and their performance characteristics. We will be using a modular design, so that we can replace the test functionality with code from other projects to see what kind of multithreading works best for that code.

      Without going into any details (it is a very simple Windows application), we put together a basic interface that looks like this:

      The only thing in this basic application that is of interest is the status bar. In the status bar, we use a call to System.Environment.ProcessorCount to report the number of logical CPUs (the screenshot was taken on an AMD Sempron system).

      Internally, we are putting the processing logic into an entirely separate class as a Shared function, so that we may easily create a command line version of the application, if desired. This class, ThreadTestUtilities also contains two Enumerations for the test types:

        Public Enum MultiThreadTestType
          SingleThread = 1
          DotNetThreadPool = 2
          OnePerLogicalCPU = 3
          OnePerPhysicalCPU = 4
          Arbitrary = 5
        End Enum

        Public Enum AtomicType
          NonAtomic = 1
          Mutex = 2
          Monitor = 3
          SyncLocking = 4
        End Enum

      Although MultiThreadTestType contains the member OnePerPhysicalCPU, you will note that the screenshot does not show this option. The reason is that relying upon the physical CPU count is quite a bad idea, as explained in the previous post. But we put it into our Enumeration anyway, in case we decide later on to allow testing on that basis.
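
      Purely for illustration (cboTestType is an assumed control name, not something from this project), filling the dropdown from the Enumeration while skipping OnePerPhysicalCPU might look like this:

        ' Hypothetical form-load code; skips the physical-CPU option on purpose.
        For Each TestType As MultiThreadTestType In [Enum].GetValues(GetType(MultiThreadTestType))
          If TestType <> MultiThreadTestType.OnePerPhysicalCPU Then
            cboTestType.Items.Add(TestType)
          End If
        Next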

      The Shared function for actually running the tests looks like this for the moment:

        Public Shared Function RunTest(ByVal TestType As MultiThreadTestType, ByVal AtomicOperations As Boolean, ByVal Iterations As Integer, Optional ByVal ThreadCount As Integer = 0) As Double
          Dim DateTimeStartTime As DateTime
          Dim DoubleReturnValue As Double

          DateTimeStartTime = DateTime.Now

          DoubleReturnValue = (DateTime.Now - DateTimeStartTime).TotalMilliseconds

          DateTimeStartTime = Nothing
          Return DoubleReturnValue
        End Function

      The function’s return value is the total number of milliseconds that it takes for the test to run. You will note that it looks like no multithreading is occurring yet. This is on purpose. We want as little multithreading as possible in the application outside of the test itself. We will later construct different functions for actually running the tests, and call them in a standard procedural way from within RunTest().
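
      For illustration only (the routine names here are stand-ins for the test methods that will be built later in this series), the dispatch inside RunTest() could eventually take a shape like this:

        ' Hypothetical dispatch between the start and end time stamps.
        Select Case TestType
          Case MultiThreadTestType.SingleThread
            SingleThreadComputation(Iterations)
          Case MultiThreadTestType.DotNetThreadPool
            ThreadPoolComputation(Iterations, ThreadCount)
          Case MultiThreadTestType.OnePerLogicalCPU
            OneThreadPerLogicalCPUComputation(Iterations)
          Case Else
            SingleThreadComputation(Iterations)
        End Select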

      The final step before we can start constructing the tests themselves is to write a baseline test function. Each of the tests will use the exact same function to perform some computations in order to keep a level playing field amongst all tests. Here is our test function:

        Private Function Compute(ByVal InputValue As Double) As Double
          Dim DoubleOutputValue As Double
          Dim DateTimeNow As DateTime
          Dim rndNumberGenerator As System.Random

          DateTimeNow = New DateTime(DateTime.Now.Ticks)
          rndNumberGenerator = New System.Random(DateTimeNow.Hour + DateTimeNow.Minute + DateTimeNow.Millisecond)

          If DateTimeNow.Millisecond > 500 Then
            DoubleOutputValue = System.Math.IEEERemainder(System.Math.Exp(rndNumberGenerator.Next * (InputValue + 5000) * System.Math.E), rndNumberGenerator.Next)
          Else
            DoubleOutputValue = rndNumberGenerator.Next(InputValue, System.Math.Max(Integer.MaxValue, System.Math.Log(System.Math.Pow(System.Math.PI, InputValue))))
          End If

          DateTimeNow = Nothing
          rndNumberGenerator = Nothing

          Return DoubleOutputValue
        End Function

      As you can see, our test function is designed to put the processor through its paces (at least for floating point operations) without using much memory at all or requiring any disk access. You may want to modify this code to use a lot of memory, or to perform disk access or network operations, to match the conditions that the applications you are working on will be facing.

      In the next post in the series, we will write our first battery of tests, to show single threaded performance, with no locking mechanism required or needed. While the development machine that I am currently working on has only one logical CPU (AMD Sempron 3200, x64 architecture, single core CPU), we will get to see this code in action on an AMD Athlon 3200+ (x64 architecture, single core CPU), an Intel Pentium 4 2.8 GHz (x86 architecture, single core CPU, one logical CPU), and a dual Xeon 3.0 GHz system (two dual-core, HyperThreaded Xeon CPUs with x64 architecture, for a total of eight logical CPUs). These will be our test systems for the duration of this blog series, and they should let you see how identical multithreading models perform on very different systems.

      J.Ja

    • #3201443

      Multithreading Tutorial – Part III: Single Threaded Performance

      by justin james ·

      In reply to Critical Thinking

      This is the third installment of a multi-part series demonstrating multithreading techniques and performance characteristics in VB.Net.

      In my previous post, I created the skeleton of a project that demonstrates multithreaded performance. In this post, we will be filling in the skeleton to dispatch the work to the correct function, and creating a performance baseline using a single thread.

      During the testing for this post, it was determined that the Compute() function outlined in the previous post did not work as expected, so it has been revised slightly. Its concept is the same, but the computation has been tweaked a bit to eliminate overflow errors in the math. So now we have the following Compute() function:

      Private Function Compute(ByVal InputValue As Double) As Double
        Dim DoubleOutputValue As Double
        Dim DateTimeNow As DateTime
        Dim rndNumberGenerator As System.Random

        DateTimeNow = New DateTime(DateTime.Now.Ticks)
        rndNumberGenerator = New System.Random(DateTimeNow.Hour + DateTimeNow.Minute + DateTimeNow.Millisecond)

        If DateTimeNow.Millisecond > 500 Then
          DoubleOutputValue = System.Math.IEEERemainder(System.Math.Exp(rndNumberGenerator.Next * (InputValue + 5000) * System.Math.E), rndNumberGenerator.Next)
        Else
          DoubleOutputValue = rndNumberGenerator.Next(InputValue) / System.Math.Max(Double.MaxValue - 1, System.Math.Log(System.Math.Pow(System.Math.PI, InputValue)))
        End If

        DateTimeNow = Nothing
        rndNumberGenerator = Nothing

        Return DoubleOutputValue
      End Function

      Our single thread run looks like:

      Public Sub SingleThreadComputation(ByVal Iterations As Integer)
        Dim IntegerIterationCounter As Integer

        For IntegerIterationCounter = 1 To Iterations
          Compute(Double.Parse(IntegerIterationCounter))
        Next
      End Sub
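
      To show how this plugs into RunTest() from Part II, here is a sketch of the single threaded branch only (a sketch, not the project’s exact wiring; since RunTest() is Shared, an instance is created to call SingleThreadComputation()):

        Dim ttuWorker As New ThreadTestUtilities

        DateTimeStartTime = DateTime.Now

        If TestType = MultiThreadTestType.SingleThread Then
          ttuWorker.SingleThreadComputation(Iterations)
        End If

        DoubleReturnValue = (DateTime.Now - DateTimeStartTime).TotalMilliseconds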

      Finally, here are our performance characteristics for 1,000,000 iterations, in milliseconds per test:

       

                        Test 1       Test 2       Test 3       Test 4       Test 5      Average
      System A       12031.250    12046.875    12125.000    11796.875    11906.250    11981.250
      System B       10937.500    10718.750    10734.375    10718.750    11000.000    10821.875
      System C       11890.320    11749.699    11765.323    12155.938    11765.323    11865.321
      System D       12359.454    12343.829    12359.454    12390.704    12406.329    12371.954
      Average                                                                         11760.100

      System A: AMD Sempron 3200 (1 logical x64 CPU), 1 GB RAM
      System B: AMD Athlon 3200+ (1 logical x64 CPU), 1 GB RAM
      System C: Intel Pentium 4 2.8 GHz (1 logical x86 CPU), 1 GB RAM
      System D: Two Intel Xeon 3.0 GHz (2 dual core, HyperThreaded CPUs providing 8 logical x64 CPUs), 2 GB RAM

      It is extremely important to understand the following information and disclaimers regarding these benchmark figures:

      They are not to be taken as absolute numbers. They are taken on real-world systems with real-world OS installations, not clean benchmark systems. They are not to be used as any concrete measure of relative CPU performance; they simply illustrate the relative performance characteristics of the various multithreading techniques on different numbers of logical CPUs, in order to show how different processors can perform differently with different techniques.

      The performance numbers on the single thread test show some truly fascinating results. System D (the dual Xeon machine) was actually our worst performer on the single threaded test. Although the Xeons seemed to suffer a bit on single threaded performance, it is expected that they will maintain nearly identical performance when running 8 simultaneous threads non-atomically, while the single core CPUs should suffer a penalty for running multithreaded.

      Another mildly interesting item to note is the difference in the clock reports between the AMD systems (A and B) and the Intel systems (C and D). The AMD CPUs rounded milliseconds up to units of 0.025 milliseconds (I showed 3 decimal places in the table, but the program returned 4). Since all of the test machines are running the same version of the .Net Framework, this is obviously a difference between the two chipmakers. It is not relevant to the results of this test, but it is an interesting data point to remember for future use, in case it ever comes up.

      Stay tuned for the next post, which will show our first multithreaded test.

      J.Ja

    • #3200904

      Tech People That I Would Like to Meet

      by justin james ·

      In reply to Critical Thinking

      [Update 9/11/2006: corrected Joel Spolsky’s name, and added relevant links]

      There are a few people in the tech world that I would really like to meet. Some of them just seem like they would be cool people to hang out with, some of them would make awesome people to have as a boss or co-worker, and others are just blindingly smart. Here is my list of tech folks that I would like to know, and why.

      Joel Spolsky [Edited 9/11/2006: Corrected Joel’s Name]

      http://www.joelonsoftware.com/
      This guy is really, really smart. Even better, he understands developers and development better than anyone else, and owns his own company. Every blog post of his, I want to forward to my boss (but usually do not, in case it is thought of as criticism). I really like how he is able to put into words the feelings that I have felt about a lot of things so succinctly.

      Steve Ballmer

      Say what you want about the guy and his business practices, he has an enthusiasm for his job that is rarely seen outside of professional athletics. He treats running Microsoft like he is on the Olympic team. Rarely do you see employees, let alone managers or executives, get so excited about stuff (remember the “Monkey Boy” video?). It would be great to have that level of enthusiasm in a manager.

      Jakob Nielsen

      http://www.alertbox.com
      I have been reading his work for well over five years, and I have yet to find something to disagree with him on. Regular readers know how important I think usability is, and he does a better job than anyone else at turning it into a science that is able to work within business realities. “Usability Engineer” is a job title I would love to have, and I would like to get together with him and figure out how to become one.

      Mark Hurst

      http://www.goodexperience.com/
      Mark Hurst is up there with Jakob Nielsen, but he is a lot more focused on the user experience itself, and not the objective study of usability. He is also much like Joel Spolsky [Edited 9/19/2006: corrected name], in that he really “gets” how tech people work, and how to help them work within the confines of business realities. His newsletters are smart and funny, and packed with unique information that you will not find anywhere else.

      John Dvorak

      http://www.pcmag.com/category2/0,1874,3574,00.asp
      I have been reading John Dvorak since I was about 11 years old, or something like that. He is extremely witty, and when he is talking about basic principles, few analysts are smarter. He is rather contrarian, like myself, which I also like. Just because everyone else thinks that something is true does not mean that he will jump on board. Lately I think he has been slipping, but much of that is because he has not been in the trenches for a long time. That still does not diminish his abilities to analyze, and he is still top notch when he has the right information. I also respect that he does not buy into the vendor buzz and media hype, unlike far too many tech writers.

      Daniel C. Dennett

      http://ase.tufts.edu/cogstud/~ddennett.htm
      While Daniel C. Dennett is not a tech industry person per se, his writing and research has a lot to do with technology, and is extremely applicable in technology. He was the first person to write about cognitive science who really made sense to me. Even when he is wrong, he is a good sport about it.

      Bill Gates

      The man is absolutely ruthless, to the point where it takes the entire government to put the brakes on him. Like Ballmer, Gates has an uncontrolled exuberance for his work and his company. While he may get dinged for copying other people’s ideas, he is excellent at combining the ideas of others in ways which they never thought of. I also really like his philanthropy work; I get the impression that he wants to win for the sake of winning, not for the rewards. While that type of predatory attitude is not entirely admirable, it is certainly respected.

      Larry Wall

      http://www.wall.org/~larry/
      The father of Perl. ‘Nuff said.

      Mark Cuban

      http://www.blogmaverick.com/
      Mark Cuban rocks, because he understands what the consumer wants. More specifically, he ignores what the consumer thinks or claims they want, and talks about what they will actually pay money for. That is a huge difference. For example, I know a lot of people who want fancy cars, but prefer to drive economy cars and not pay for a fancy car. Mark Cuban is also a very funny person to read. His whole to-do with Overstock.com was probably the funniest thing in the tech business world for quite some time.

      James Gosling

      http://blogs.sun.com/jag/
      A geek’s geek, if you know what I mean. He is totally, brutally open and honest, to the point where you wonder that his employer (Sun) allows him to appear in public without a muzzle. I remember reading an interview with him a while back, where he actually answered the question of “what are your biggest mistakes” with legitimate answers.

      Who is on your list of people in the Tech World that you would like to meet?

      J.Ja

      • #3282697

        Tech People That I Would Like to Meet

        by tony hopkinson ·

        In reply to Tech People That I Would Like to Meet

        You. LOL

        Niklaus Wirth (he finalised the language Pascal) would have been nice.
        Alan Turing, John von Neumann, Blaise Pascal himself, George Boole, Charles Babbage. The giants on whose shoulders we all stand, to quote another genius.

        Reverend Charles Dodgson, better known as Lewis Carroll, did some very good work on logic, and was not a bad author either.

        The guy who started me in computing, Mr AJ Hopkins (not a blood relation), also unfortunately passed away; in my opinion, another giant.

        These young whippersnappers you mention are probably interesting people as well.

         

      • #3200543

        Tech People That I Would Like to Meet

        by siryes ·

        In reply to Tech People That I Would Like to Meet

        It’s Joel Spolsky not “Splosky”, I bet you know that.
        Is there any chance that your posting can be corrected?

        http://www.joelonsoftware.com/

      • #3200542

        Tech People That I Would Like to Meet

        by ricardo.silva ·

        In reply to Tech People That I Would Like to Meet

        Maybe you could post relevant links to the sites/blogs related to your choices.
        We all know how to “google” them, but you could speed things up and increase your article’s usability 🙂

      • #3200450

        Tech People That I Would Like to Meet

        by justin james ·

        In reply to Tech People That I Would Like to Meet

        I did indeed spell Joel’s name wrong, thanks for catching that! His last name is one of those items where I have to actively focus on getting it right, because the original spelling I had is how my mind always remembers his name. I am pretty sure that a historical search of my blogs shows that I consistently spell it wrong (sorry Joel!). If it is any consolation, my uncle routinely spells my last name incorrectly, despite it being one of the top 10 most common first names out there. Jeames, Jeams, Jams, Jaemes, and Jaems are all ways he has spelled it. 🙂

        Ricardo, excellent point about the links. I should have done that from the get-go. I have updated the blog. Interestingly enough, Steve Ballmer and Bill Gates are the only ones without blogs or personal Web sites. Instead, they choose to communicate through the Microsoft Executive Email system (http://www.microsoft.com/mscorp/execmail/).

        J.Ja

      • #3204798

        Tech People That I Would Like to Meet

        by alangeek ·

        In reply to Tech People That I Would Like to Meet

        Just one more Spolsky to fix under Mark Hurst.

      • #3203166

        Tech People That I Would Like to Meet

        by mark miller ·

        In reply to Tech People That I Would Like to Meet

        The top of my list right now would be Alan Kay. Maybe Douglas Engelbart as well.

        Both of them thought revolutionary ideas. They probably still do. Kay created the concept model of the personal computer as we know it today, back in the early 1970s when personal computers were just barely getting started, and when most people in the industry thought that a computer was a mainframe or minicomputer, one that would take up part or all of a room. I’m pretty sure he and his fellow researchers invented the GUI that inspired Steve Jobs to move Apple to create the Macintosh. They made it for his “Smalltalk system”, which he wrote at Xerox PARC for the Alto prototype. He created the canonical object-oriented language, Smalltalk, a language I still love to work with whenever I get the chance.

        Kay thinks deeply about technology’s influence on society, and how personal computers have the potential to influence it years down the road. I’ve listened to several of his speeches. He has an amazing intellect.

        Engelbart is widely known in the computer science field as having invented the computer mouse, but he was a pioneer in envisioning how computers could serve other people besides system operators, an interactive computer that would respond directly to the user. He thought time-sharing systems were the future, and in that I think he was mistaken, but he had the germ of a good idea. This is just my opinion, but I think his most forward-thinking concept was collaborative computing. I encourage people to watch the online video of his Menlo Park demo, where he gave a public demonstration of what he and fellow researchers had created, in 1968. It was so far ahead of its time some thought his demo was smoke and mirrors. It was all real. This is reminiscent of the first public demonstrations of Alexander Graham Bell’s telephone system. Back when the phone was invented, some people couldn’t believe that two people could talk to each other over a wire.

        I wrote an article about Engelbart and Kay at my blog. I’ve got quite a few video links to relevant material there.

        Bill Gates

        Despite his billionaire status, he is still a geek (I mean that affectionately). There are people today who question whether he’s a programmer at heart, or if he ever wrote code of any significance. His programming days are probably long gone, but he has the mind for it. Spolsky wrote an article about a code review he did with Gates back when he worked at Microsoft. Gates gets it. I’d love to talk tech with him if I got the chance, though these days he seems far more interested in his philanthropic work.

        Alan Cooper

        He created the technology that became Visual Basic at Microsoft. He didn’t create the “Basic” part (the Basic came from a version that Microsoft already had), just the “Visual” part. I listened to an interview with him a year or two ago, and I got the sense that he understands the human-application interface, and usability very well. It would be neat to listen to him just do a “mind dump”.

        Any of the editors of Compute! Magazine, like Jim Butterfield, John Lock, Richard Mansfield, Charles Brannon, Tom R. Halfhill, etc.

        This magazine went out of publication in 1994, but it had a major influence on me becoming an application developer, growing up. They tried hard to present technology as approachable and fun, and they conveyed the sense that the 1980s were historic times for technology, and that personal computers (back then) were the driver of the revolution.

        Stewart Cheifet

        Cheifet was the host, and probably the creator, of one of my all-time favorite TV shows: The Computer Chronicles. It was cancelled in 2002 unfortunately. When the show first started out it was co-hosted by the legendary Gary Kildall, who was also the President of Digital Research, and the author of CP/M. From the early 1980s onward, this show covered the PC industry. It gave both established and budding companies an opportunity to show that they existed, and to do a little demo of their wares. The show ran once a week, and would always end with “This week’s computer news”. Kildall left as co-host sometime in the early 1990s. He died a few years later.

        The archives for this show are available here.

      • #3203045

        Tech People That I Would Like to Meet

        by justin james ·

        In reply to Tech People That I Would Like to Meet

        Mark –

        Alan Kay, excellent choice! My list was based not just along the lines of “who is really smart,” but “who is interesting.” I think Alan Kay meets that requirement. There are tons of smart people in the IT industry who just seem really boring. Michael Dell, Michael Arrington, and Larry Ellison, for example.

        J.Ja

      • #3204120

        Tech People That I Would Like to Meet

        by mark miller ·

        In reply to Tech People That I Would Like to Meet

        Justin-

        I always saw Michael Dell as more of an interested businessman, someone who liked computers, but didn’t get into them the way you and I do. I’ve never heard of Arrington, so I can’t comment. Ellison is interesting to listen to, but more for entertainment value. He has a kind of flair and panache that’s engaging, in a way. He’s one of the few people in the industry who you could imagine hob-nobbing with the most elite sophisticates, and in that way I always imagine him as a legitimator of the IT industry. He’s someone who says, “IT is classy.” Nevermind if it isn’t true. It’s the image that counts. His views on the industry are almost guaranteed to be irrelevant though. The one “hit” I’ve seen in his predictions that’s come true is the advent of online applications (web applications, basically), and readily available online information.

    • #3201221

      Why is your *Nix Server Running X?

      by justin james ·

      In reply to Critical Thinking

      OK, so a temporary Ubuntu bug disabled X. And this is a problem because? In the years that I have run *Nix servers, I have never once felt the need to install X. The only time I installed X on a server was because someone else in the organization needed an easy way to follow the progress of some code they were running, and letting them log in to X to view a graphical version of top was what they wanted.

      Folks, there is no good reason to run (let alone install!) X on a server, unless it is being used as a server for thin clients. If you have a ton of people using X applications, it is easier to get a bunch of X clients and hook them into a big server.

      Outside of that fairly rare scenario, why is X on your server? There is nothing that needs to be done on a *Nix server that cannot be done via the CLI, without the hassle and headache of dealing with X. X is one of the most miserable pieces of software out there to deal with, right behind BIND. So unless you are running an X farm, rip X off of your server.

      J.Ja

      • #3282700

        Why is your *Nix Server Running X?

        by tony hopkinson ·

        In reply to Why is your *Nix Server Running X?

        It’s on there so it can compete with windows.
        LOL
        Actually, the first servers I set up, I did put X on them and started using them through the GUI. I was a complete newbie though; as my knowledge improved I used it less and less, until it was simply lying about getting dusty. Some of the GUI tools are bloody awful. Mandrake 9.0’s KCron, for instance: what a piece of crap that was. Trying to figure out why my scheduled tasks weren’t running when I wanted them to, particularly user specific ones, was the first time I really dipped into the CLI.

      • #3200548

        Why is your *Nix Server Running X?

        by jaqui ·

        In reply to Why is your *Nix Server Running X?

        The only *x server running X I have is one I use for testing, since it's more a workstation than a server.
        My strictly-server boxes are all running Linux From Scratch with no X installed.

    • #3226794

      Multithreading Tutorial – Part IV: SyncLock Performance

      by justin james ·

      In reply to Critical Thinking

      This is the fourth installment of a multi-part series demonstrating multithreading techniques and performance characteristics in VB.Net.

      In today's post, we take a look at using the SyncLock statement to maintain data integrity during operations. We will also be showing off the .Net ThreadPool object. Initially, I planned to write my own version of the ThreadPool; however, there is really no reason to do so within the scope of this series. The .Net ThreadPool works great for our needs: it is a very smart way of managing multiple threads, and it lets us set the maximum number of executing threads, which is exactly what we need in order to see what effect limiting or expanding the number of running threads has on performance. Writing a thread pool of our own is a rather difficult task, and it is not needed unless your project has very specialized needs.

      Here is the code for the SyncLockThreadWorker object that performs the computations and atomic operations:

      Public Class SyncLockThreadWorker
        Public Shared objStorageLock As New Object
        Public Shared objCompletedComputationsLock As New Object
        Public Shared IntegerCompletedComputations As Integer = 0
        Private Shared DoubleStorage As Double

        Public Property Storage() As Double
          Get
            SyncLock objStorageLock
              Return DoubleStorage
            End SyncLock
          End Get
          Set(ByVal value As Double)
            SyncLock objStorageLock
              DoubleStorage = value
            End SyncLock
          End Set
        End Property

        Public Property CompletedComputations() As Integer
          Get
            Return IntegerCompletedComputations
          End Get
          Set(ByVal value As Integer)
            IntegerCompletedComputations = value
          End Set
        End Property

        Public Sub ThreadProc(ByVal StateObject As Object)
          Dim ttuComputation As ThreadTestUtilities

          ttuComputation = New ThreadTestUtilities

          Storage = ttuComputation.Compute(CDbl(StateObject))

          SyncLock objCompletedComputationsLock
            CompletedComputations += 1
          End SyncLock

          ttuComputation = Nothing
        End Sub

        Public Sub New()

        End Sub
      End Class

      Inspection of the class shows that we use SyncLock in two places. The first place is in updating the Storage property. The other place is in updating the count of completed computations. Because we never read the Storage property at the same time that we write to it, placing the lock within the property accessor is acceptable. For the CompletedComputations property, however, the increment reads and writes the value in a single statement, so we must hold the lock across the entire operation as we work with it, rather than locking inside the accessor.
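
      To spell out why an accessor-level lock alone would not be enough here, this is roughly what the increment expands into (a sketch of the reasoning, not code from the class above):

        ' Dim temp As Integer = CompletedComputations   ' read   (a lock inside Get would already be released here)
        ' temp = temp + 1                               ' modify
        ' CompletedComputations = temp                  ' write  (a lock inside Set would be acquired separately)
        '
        ' Another thread can slip in between the read and the write and lose an update,
        ' which is why ThreadProc wraps the whole increment in SyncLock objCompletedComputationsLock.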

      This is the code that creates and runs the threads, as well as waits for them all to finish before completing. If we do not wait for all of the threads to be finished, the Sub will exit as soon as the threads are queued.

      Public Sub SyncLockMultiThreadComputation(ByVal Iterations As Integer, Optional ByVal ThreadCount As Integer = 0)
        Dim twSyncLock As SyncLockThreadWorker
        Dim IntegerIterationCounter As Integer
        Dim iOriginalMaxThreads As Integer
        Dim iOriginalMinThreads As Integer
        Dim iOriginalMaxIOThreads As Integer
        Dim iOriginalMinIOThreads As Integer

        twSyncLock = New SyncLockThreadWorker

        Threading.ThreadPool.GetMaxThreads(iOriginalMaxThreads, iOriginalMaxIOThreads)
        Threading.ThreadPool.GetMinThreads(iOriginalMinThreads, iOriginalMinIOThreads)

        If ThreadCount > 0 Then
          Threading.ThreadPool.SetMaxThreads(ThreadCount, ThreadCount)
          Threading.ThreadPool.SetMinThreads(ThreadCount, ThreadCount)
        End If

        For IntegerIterationCounter = 1 To Iterations
          Threading.ThreadPool.QueueUserWorkItem(AddressOf twSyncLock.ThreadProc, Double.Parse(IntegerIterationCounter))
        Next

        While SyncLockThreadWorker.IntegerCompletedComputations < Iterations

        End While

        Threading.ThreadPool.SetMaxThreads(iOriginalMaxThreads, iOriginalMaxIOThreads)
        Threading.ThreadPool.SetMinThreads(iOriginalMinThreads, iOriginalMinIOThreads)

        twSyncLock = Nothing
        IntegerIterationCounter = Nothing
      End Sub
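
      For reference, the tests below correspond to calls along these lines (my own hypothetical invocations; the exact test harness is not shown here):

      ' Let the ThreadPool manage the thread count on its own (Test 1):
      SyncLockMultiThreadComputation(1000000)

      ' Limit the pool to one thread per logical processor (Test 2):
      SyncLockMultiThreadComputation(1000000, Environment.ProcessorCount)

      ' Force a fixed number of concurrent threads (Tests 3 through 7):
      SyncLockMultiThreadComputation(1000000, 2)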

      Here are the results of our tests. All tests are for 1,000,000 iterations, and the results are in milliseconds per test run.

      TEST 1

      This test allows the ThreadPool to manage the total number of minimum and maximum threads on its own:

       

                   Test 1     Test 2     Test 3     Test 4     Test 5    Average
      System A  17250.000  16562.500  13984.375  14875.000  15531.250  15640.625
      System B  16234.375  20343.750  16718.750  16656.250  13906.250  16771.875
      System C  17529.413  23174.478  17998.532  19030.594  17685.786  19083.761
      System D  33734.590  33609.590  33281.463  33297.088  33328.338  33450.214

      Overall average: 21236.619

      TEST 2

      In this test, we limit the maximum number of threads to one per logical processor:

       

                   Test 1     Test 2     Test 3     Test 4     Test 5    Average
      System A  13484.375  12987.500  14656.250  13437.500  14281.250  13769.375
      System B  15906.250  19593.750  13953.125  15859.375  14265.625  15915.625
      System C  17799.610  20312.852  25457.524  27521.648  17748.335  21767.994
      System D  33203.337  33265.837  30765.821  31172.074  33203.337  32322.081

      Overall average: 20943.769

      TEST 3

      This test uses only one thread:

       

                   Test 1     Test 2     Test 3     Test 4     Test 5    Average
      System A  13125.000  14093.750  15390.625  14046.875  14875.000  14306.250
      System B  13031.250  11859.375  13000.000  14546.875  12015.625  12890.625
      System C  16481.714  12681.850  14652.150  12728.762  14386.316  14186.158
      System D  33343.963  21953.265  21687.638  22609.519  23625.151  24643.907

      Overall average: 16506.735

      TEST 4

      This test uses two concurrent threads:

       

                   Test 1     Test 2     Test 3     Test 4     Test 5    Average
      System A  13656.250  14031.250  13515.625  14484.375  13953.125  13928.125
      System B  17656.250  14453.125  17093.750  16828.125  18656.250  16937.500
      System C  21673.297  22689.722  30649.108  20844.520  33792.205  25929.770
      System D  22906.396  22922.021  22172.016  24515.781  25484.538  23600.150

      Overall average: 20098.886

      TEST 5

      Here we show four concurrent threads:

       

                   Test 1     Test 2     Test 3     Test 4     Test 5    Average
      System A  27968.750  24359.375  24406.250  21515.625  22765.625  24203.125
      System B  33078.125  23546.875  31687.500  34000.000  33953.125  31253.125
      System C  30649.108  33510.733  31462.247  34011.127  24097.079  30746.059
      System D  25390.787  25437.662  25484.538  25515.788  25609.538  25487.663

      Overall average: 27922.493

      TEST 6

      This test uses eight concurrent threads:

       

                   Test 1     Test 2     Test 3     Test 4     Test 5    Average
      System A  25484.375  26031.250  26125.000  25312.500  25843.750  25759.375
      System B  35812.500  36312.500  34875.000  21578.125  23328.125  30381.250
      System C  34402.060  34730.443  34152.920  34726.065  34556.444  34513.586
      System D  31234.574  31578.327  31328.325  31468.951  31359.575  31393.950

      Overall average: 30512.040

      TEST 7

      Finally, this test runs 16 simultaneous threads:

       

                   Test 1     Test 2     Test 3     Test 4     Test 5    Average
      System A  25671.875  25562.500  25796.875  27156.250  26078.125  26053.125
      System B  31750.000  13171.875  38953.125  43703.125  35500.000  32615.625
      System C  33615.818  33710.465  33959.402  33974.959  34021.628  33856.454
      System D  50422.197  52656.587  44015.906  50953.451  44859.662  48581.561

      Overall average: 35276.691

      System A: AMD Sempron 3200 (1 logical x64 CPU), 1 GB RAM
      System B: AMD Athlon 3200+ (1 logical x64 CPU), 1 GB RAM
      System C: Intel Pentium 4 2.8 GHz (1 logical x86 CPU), 1 GB RAM
      System D: Two Intel Xeon 3.0 GHz (2 dual core, HyperThreaded CPUs providing 8 logical x64 CPUs), 2 GB RAM

      It is extremely important to understand the following information and disclaimers regarding these benchmark figures:

      They are not to be taken as absolute numbers. They were taken on real-world systems with real-world OS installations, not clean benchmark systems. They are not to be used as any concrete measure of relative CPU performance; they simply illustrate the relative performance characteristics of different multithreading techniques on different numbers of logical CPUs, in order to show how the same technique can behave quite differently from one processor to the next.

      Compared to our previous test results, it is very easy to see that atomic operations carry a very high cost. Additionally, each active thread above and beyond the number of logical processors degrades performance, because the OS (I say "the OS" instead of "Windows" because this applies to any multithreading OS that uses time slices) has to perform quite a bit of work to manage each thread running on a processor.

      It is also easy to see that the ThreadPool object does not make the best possible decisions regarding the number of active threads. While this is not evidence that the ThreadPool object is poorly written, it is evidence that it is wise to override its settings in order to tune performance to the application's needs, based on the type of operations occurring in the running threads. An application whose threads are waiting on asynchronous I/O, for example, will be able to sustain many more active threads than one like this that is performing raw computations.

      It would also be interesting to force a number of concurrent threads, instead of allowing the ThreadPool to manage it. It is difficult to tell whether the relatively static results beyond a few threads occur because the ThreadPool is deliberately throttling the number of running threads, or because the SyncLock is creating a traffic jam that no amount of pooling can resolve. The only way to test this would be to rewrite the test to use a homegrown thread pool, and possibly to have each thread perform a much higher ratio of work to locking. It is also interesting to see that the dual Xeon system still plods behind the other systems; it may be that with a much higher number of forced concurrent threads the Xeon would truly shine.

      My next post will have extremely similar code, but will use the Mutex object instead of SyncLock to manage the atomic operations.

      J.Ja

    • #3229048

      PHP Is Doomed

      by justin james ·

      In reply to Critical Thinking

      It tends to be the case that when there are three or more competitors for something, the race gets narrowed down to two real contenders fairly quickly. In the case of ASP.Net, J2EE, and PHP, I think the odd man out is going to be PHP. “But,” you cry out, “PHP is the language of choice for so many open source Web projects!” So was Perl at one point. It did not save Perl, and it will not save PHP. Why not? PHP does not allow the programmer to multithread their application.

      I have been doing a series of posts about multithreading techniques with VB.Net (next week I will be posting Part V, which uses Mutexes for atomic operations). The code is almost directly translatable into C#, and the principles are identical in Java. They will not work in PHP, though. PHP, as far as I can tell by searching and browsing its documentation, does not support multithreading. This is going to be a major vulnerability for PHP in the very near future.

      One of the major drivers of the Internet explosion has been the plummeting cost of running a server, thanks in large part to the ability of commodity hardware to handle the load of serving pages over the last few years. Between the LAMP stack, J2EE, and the Microsoft stack, there are plenty of options, and they all work very well on a server that can be had for well under $5,000. Until recently, however, these commodity servers had one physical and logical CPU. AMD and Intel have changed the game with their dual core (and soon, quad core) CPUs, and SMP motherboards have come down significantly in price as well. That $5,000 can now get you a server with two dual core Xeon, Core 2 Duo, or Opteron CPUs. In other words, a modern server is at minimum a two logical CPU machine and is headed towards four to sixteen logical cores quite soon (two quad core CPUs with HyperThreading is sixteen logical cores). The Sun T1 CPU has put a 32 thread pipeline into the under-$10,000 price range.

      Furthermore, more and more Internet applications retrieve, but do not strictly require, data from elsewhere, possibly across a WAN link. If you are a Web developer, would you prefer your application to continue processing its work (for example, retrieving data from the local database, dynamically creating images, etc.) while waiting on the third-party chunk of code (such as a Web service), and only have part of the page show a "Sorry!" message? Or do you think it is better for the entire page to take forever if the third-party data source has a problem? Personally, I would prefer to process those WAN requests on a separate thread. But PHP cannot do that, because it does not support multithreading.
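
      To make the pattern concrete, here is a minimal VB.Net sketch of the approach I am describing (my own illustration, using a console program and a Thread.Sleep stand-in for the Web service call, rather than code from a real page renderer):

      Imports System.Threading

      Module WanRequestSketch
        Private RemoteData As String = Nothing

        Private Sub FetchRemoteData()
          ' Stand-in for a slow third-party call across a WAN link.
          Thread.Sleep(2000)
          RemoteData = "Data from the remote service"
        End Sub

        Sub Main()
          ' Kick off the slow WAN request on its own thread...
          Dim worker As New Thread(AddressOf FetchRemoteData)
          worker.Start()

          ' ...and keep doing the local work in the meantime
          ' (database queries, image generation, and so on).
          Dim localContent As String = "Locally generated portion of the page"

          ' Wait a bounded amount of time for the remote data, then degrade gracefully.
          If Not worker.Join(3000) OrElse RemoteData Is Nothing Then
            RemoteData = "Sorry! The remote data is unavailable right now."
          End If

          Console.WriteLine(localContent)
          Console.WriteLine(RemoteData)
        End Sub
      End Module

      The point of the sketch is simply that the "Sorry!" message replaces one region of the output while the locally generated portion is unaffected, which is exactly what a single-threaded page cannot do while it is blocked on the WAN call.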

      Sure, PHP supports UNIX style process forking. But the documentation has this to say about forking:
      Process Control support in PHP implements the Unix style of process creation, program execution, signal handling and process termination. Process Control should not be enabled within a webserver environment and unexpected results may happen if any Process Control functions are used within a webserver environment.
      In other words, PHP, a language that is not terribly useful outside of Web development, should not use process forking in a Web server. Huh?

      PHP is weak in many other areas as well. Its documentation is not very good, and frequently the comment threads provide the information that the documentation should have included but did not. It lacks high quality tools such as IDEs and debuggers in a “ready-to-use” state. I will give PHP this: it is easier to install on top of Apache or IIS than any J2EE server that I have encountered. But outside of that, its only strength was being a Perl-like language (allowing Perl developers to transition to it) while being open source (which neither Java nor .Net are) and being adapted to the Web (which Perl was not). With Ruby and Python in the game, PHP is no longer unique. From what I have heard, read, and seen, Ruby on Rails kicks PHP six ways to Sunday.

      PHP is getting hit from all sides, from where I sit. It took RHEL about two years to support PHP5. PHP lacks multithreading support. The .Net Framework and J2EE ecosystems are exploding, while PHP5 still feels like a limited knockoff of Perl designed to work exclusively with Web pages, with a hacky-feeling object model. If you learn Java or a .Net language, you can transition very easily to desktop applications or non-Web server applications; you just need to learn new methods of user interaction. Not so with PHP; it is pretty much useless outside of a Web server. Meanwhile, Ruby (which does support multithreading) is an excellent general purpose language, the Ruby on Rails framework makes PHP look downright Mickey Mouse in comparison, and it natively supports AJAX (as much as I dislike AJAX, I recognize that it does have its uses, and it is currently quite hot). And Python has been threatening to be "the next big thing" for long enough to believe it may have a chance.

      To put it another way, PHP does not do anything that the other languages cannot do, and it lacks multithreading support, which is quickly becoming a showstopper. If PHP does not catch up soon, I believe that it is doomed. Ruby and Python are poised to take its crown as the open source Web development platform of choice. Remember, Perl and CGI used to own the Web development space entirely; now Perl barely exists in Web development outside of legacy applications. Things change, and PHP needs to change quickly, or it will follow Perl.

      J.Ja

      • #3228997

        PHP Is Doomed

        by pawelbarut ·

        In reply to PHP Is Doomed

        I would say that multithreading is not needed in PHP. If we consider PHP as a language to generate web pages, we really do not need it. Web applications are usually used by many users simultaneously, so the processor power can be utilized. Different users are served by different apache+php processes.
        Paweł.

      • #3228975

        PHP Is Doomed

        by justin james ·

        In reply to PHP Is Doomed

        Yes, Apache does handle the multiple PHP processes just fine for you, without needing it in the language. But like I said in the post, the world is increasingly moving towards things like Web services and other methods of getting information to be used on the page across a WAN link. In those scenarios, multithreading is extremely important. Why hold up all of your local processing while waiting on data from the WAN, when you can get your local processing done and then insert the data retrieved remotely at the last minute?

        J.Ja

      • #3227309

        PHP Is Doomed

        by jaqui ·

        In reply to PHP Is Doomed

        Why does php need multithreading? To make websites even more inaccessible to those with limited or no vision than they already are?

        Currently, the proliferation of rich media content in websites is working directly against the core concept of the internet, that being to make information accessible to all. This "consume web services" model of site design guarantees that those who require assistive technologies, such as a braille terminal, cannot access that website.

        With that in mind, your argument against php becomes an argument for it.

        Next, the ajax type consumption of web services is a clientside scripted method to duplicate a dead system; the server push cgi tech performed exactly the same service, with exactly the same result, and can be done with php in a *x environment, since php can be used as a shell script or cgi script on any *x box.

      • #3227164

        PHP Is Doomed

        by justin james ·

        In reply to PHP Is Doomed

        Jaqui, I hate to say it, but you are completely off base here. Giving PHP developers multithreading, and allowing them to consume Web services on the server side, is absolutely critical to avoiding AJAX, client side scripting, and usability issues. I know for a fact that you and I both support the same goals, with only a few small differences in this area of the world. It is infinitely better, in my opinion, if you are going to draw data/information from a third party source, to do it server side and put it in a usable, JavaScript-free way in standard, accessible HTML than to hide it behind a million layers of clunky JavaScript, XmlHttpRequest, AJAX code. As such, it is crucial that PHP support multithreading – so that these third party sources may be used in a non-AJAX way, without interfering with the processing of the rest of the page! To argue otherwise makes no sense.

        More to the point, adding multithreading to PHP costs the language nearly nothing, yet opens it up to new uses on a very wide scale. Even if you are not going to pull data from a third party source, building a table in memory while waiting on a request to a database across the WAN, and then filling that table with the results, is often (depending on the query, the table, and so on) better than waiting for the database results and only then creating the table.

        Usually I agree with you here, but I think you misunderstood my point. I like having things on the server as much as possible. Now that multicore CPUs are hitting all but the lowliest servers (I cannot imagine who buys a Celeron server to dish out Web pages, though, unless it is a very low volume site…), with a simultaneous decrease in per-core clock speed, Web developers who are not able to multithread (either due to lack of knowledge or poor choice of language) are going to be unable to make the best use of their servers, and they are going to have a very hard time indeed explaining why keeping the processing on the server is preferable to AJAX.

        J.Ja

      • #3229275

        PHP Is Doomed

        by tony hopkinson ·

        In reply to PHP Is Doomed

        Oh boy. In my opinion scripting is now seriously overused; no doubt threading would be a good addition to PHP, and I'm sure we could find applicable uses for the functionality. When people start talking critical performance issues and scripting I have to LMAO, though: if performance is critical, you shouldn't be using scripts in the first place. Horses for courses. Bring back the compiler.

      • #3229159

        PHP Is Doomed

        by justin james ·

        In reply to PHP Is Doomed

        Tony –

        The point is well taken; that is one thing that Java and .Net have as an advantage. Yes, they are running within managed code environments, but it is still faster than a run time compile + dynamic interpretation. Do not even get me started on dynamic typing, which I will be writing about as soon as I am finished with multithreading.

        That being said, the sad reality is that no one wants to be coding in C++/CGI. Is it lightning fast? It sure is! Scalable? A zillion times better than the usual Web development techniques. Feasible from a business standpoint? Not at all. Yes, the savings in hardware costs alone more than make up for the additional salary of a good C++ coder over a good Web developer. And yes, a great C++ coder can crank out usable, quality code in the same amount of time as a decent Web developer on a feature-per-day basis.

        But the reality is, the number of great (and even good) C++ coders is simply not that high, and most of them are currently occupied working on things such as development tools, OS utilities, and other high level projects. Meanwhile, the explosive demand for programmers, fueled by the significantly reduced cost and difficulty of writing Web applications as opposed to desktop applications, means that good C++ coders are becoming an increasingly rare commodity as a percentage of the total number of programmers out there.

        While I am on the public record as stating that any big application (Web or otherwise) is actually cheaper to write in well-written, compiled native code, the truth is, managed code environments and dynamic languages are growing much, much faster in usage, with a corresponding growth in available programmers. For whatever reason, even a company with a Web app with high scalability needs will probably end up developing in Java, PHP, or .Net, not native code, even though native code will pay for itself once you get beyond the first half rack or so worth of extra servers needed to run the less scalable code.

        Reality being what it is, it is better to improve the systems that are actually being used, than to try to convince folks to start writing native code for Web apps.

        J.Ja

      • #3229334

        PHP Is Doomed

        by kovachevg ·

        In reply to PHP Is Doomed

        A few quick points:

        1. PHP is popular.

        2. PHP is free

        3. PHP runs on everything – choose your OS !!!

        4. PHP has solid object-oriented constructs (Perl does not).

        5. PHP plays well with the stable DBs – MySQL and PostgreSQL.

        None of the other options combine these features and flexibilities.

        I will stop here because I don’t want to bother you. But, hey, you are welcome to wait for Judgement Day – many have waited … but so far in vain.

      • #3229330

        PHP Is Doomed

        by kovachevg ·

        In reply to PHP Is Doomed

        As to Multithreading, there are two things that play a substantial role:

        1. The cost of the developer – yes, you need someone who really understands threads.

        2. The cost of the hardware – two 1-U single processor servers are cheaper than one dual-Xeon-processor server (1-U or 2-U, regardless). So I would rather minimize my coding time by using PHP and have a small cluster of 1-U servers than hire a high-end, high-salaried programmer to do the job and then pay him through the nose to maintain the application. Because you know how it is on the business side – requirements change, and so you must constantly update the applications to stay competitive.

      • #3205454

        PHP Is Doomed

        by justin james ·

        In reply to PHP Is Doomed

        Kovachevg –

        None of the other options combine these features and flexibilities.

        Ruby does, and I said so in my post. PHP is not unique. Except for the object oriented bit, Perl meets your list, and frankly, PHP's object orientation is so clumsy that I do not consider PHP to be an object oriented language. It is certainly pretty crummy at model/view/controller development, N-tier development, and event-driven development. Furthermore, as the world progresses and Web and desktop development get closer and closer (take a look at XAML if you do not believe me!), developers need the techniques to be as identical as possible, to allow for better code reuse, simplified testing, and skill reuse. PHP is too tied to the Web model to be more than a bad Perl knockoff outside of handling a Web request. It does not support any modern programming methodologies in the slightest. In a nutshell, it is a dinosaur with a pretty bow around its neck.

        1. The cost of the developer – yes, you need someone who really understands threads.

        I agree. And within the next few years, a developer who does not understand threads will be a developer who is not employed. Threads are approaching the "must have" point, thanks to the fact that nearly every up-to-date PC will be multi-core. Even low end PCs are dual core already.

        2. The cost of the hardware – two 1-U single processor servers are cheaper than one dual-Xeon-processor server (1-U or 2-U, regardless).

        I do not know where you are getting your numbers from, but they are not accurate in the slightest. First of all, the Core 2 Duo chips (at least on the low end) are already selling for less than the previous generation of CPUs, and they are brand new. Second of all, the price differential between a dual-core Xeon server and a single core whatever server is not that great. Third of all, the TCO of one dual core Xeon (or Core 2 Duo, or whatever your CPU of choice is) server is substantially less than the TCO of two servers, all else being equal. The price difference of one chassis vs. two alone is probably 25% of the price difference in CPUs.

        So I would rather minimize my coding time by using PHP and have a small cluster of 1-U servers than hire a high-end, high-salaried programmer to do the job and then pay him through the nose to maintain the application. Because you know how it is on the business side – requirements change, and so you must constantly update the applications to stay competitive.

        I agree with this, to a point. Yes, throwing hardware at poorly architected software works, to a point. But if there are scalability problems, there are scalability problems. More importantly, you still have not addressed how to best consume data from third party sources in an asynchronous I/O scenario (which is becoming increasingly common) without blocking the whole application. A Pentium II will put out a Web page as quickly as a T1 Niagara if they are both waiting 2 seconds on data across the WAN. Only multithreading can resolve this.

        More to the point, a well written, multithreaded application, depending upon the usage of the threads, does not take significantly longer to develop and is not significantly more difficult to maintain, if the code is well written and the application is correctly designed. In an asynchronous I/O scenario, multithreading adds only a few minutes to a few hours worth of development time per function. That is a small fraction of the total development time, and it can show substantial improvements in application response while making better use of hardware, saving buckets of money.

        I did not say that PHP stinks (which is an opinion I hold, but is outside the scope of this post). I said that it is doomed. Some great technologies and companies are (and have been) doomed. Novell, Borland, Delphi (I know, Borland two times in a row), the internal combustion engine… the list goes on. As you point out, requirements change. And if PHP does not get multithreading support, it simply cannot win, let alone continue to exist.

        J.Ja

      • #3205434

        PHP Is Doomed

        by rootropy ·

        In reply to PHP Is Doomed

        I agree with you. I recently migrated to Ruby and Python; I was so tired of writing a thousand lines of code in PHP vs. a hundred lines (or less) in Ruby.
        The support for Object Oriented Programming is better in Ruby than in Java, and a thousand miles away from PHP 5.

        I only disagree when you say that PHP lacks IDEs and debuggers. Zend Studio is very easy to install and use, and it is an excellent editor, with PHPDocumentor and CVS (or Subversion) integrated, for example. And it has a debugger, too.

      • #3205080

        PHP Is Doomed

        by james brown ·

        In reply to PHP Is Doomed

        I agree that PHP needs a facelift but I think it is a little too early to call the fight.  There are some serious issues with your PHP-killer alternatives (J2EE and .NET).

        J2EE is hardly what I would call a “stable” environment.  Take a look at any J2EE based web app and you will find it crashing many orders of magnitude more frequently than a similar PHP based application.

        .NET suffers from a lack of platform support.  Remember that companies are looking at TCO and that goes way up when you talk about .NET because of the cost of development environments.

        The best contender, which you barely brush over, is Ruby on Rails.  It is gaining some traction and is most likely to take over the bulk of web development (which is not done by commercial organizations, but by private individuals via open source) if anyone unseats PHP.

        – James

      • #3204920

        PHP Is Doomed

        by debuggist ·

        In reply to PHP Is Doomed

        I’d like to see you go one-on-one with the author of this article. Wow, the article was written three years ago but still seems pretty current.

      • #3204848

        PHP Is Doomed

        by justin james ·

        In reply to PHP Is Doomed

        Doug –

        While that article's points were valid at the time (and they are still factually correct), the conclusion is dead wrong. PHP has only been a real contender when open source vs. closed source is taken into account. Even when that article was written, PHP's competition had all of those features – and a lot more. And the editors the author mentioned are not open source projects; they cost real money. PHP's advantage is that it is easy to edit in a plain text editor, while editing .Net or Java code outside a good IDE is possible, but very painful. Where that article is outdated (and quite severely) is on three points:

        1) Development tools – at that time, there were no quality, free tools for .Net or Java. Today, there are Eclipse and Visual Studio Express.

        2) Competition – at that time, PHP was supplanting Perl in the "open source Web development" space. Your only other option with real support was compiled C/C++ in a CGI environment. Fast and scalable, but not friendly in the slightest. Today, PHP has real competition from legitimate corners: Ruby/Ruby on Rails, Python, and Java going open source. Compared to Ruby (I cannot speak from personal experience, however, since I have not used Ruby), PHP is a joke. Even worse for PHP, Ruby is currently "sexy" and has built in support for AJAX (via Rails), which makes it even more "sexy." PHP is not "buzzword compliant"; Ruby, Java, and .Net are.

        3) Suitability for more than one task – that article was written when there was little work being done on systems where portions of the code could and would be used by more than one type of environment. Today, it is almost mandatory that the business logic go somewhere it can be used by a desktop application, Web application, mobile application, and maybe a few other places. PHP is also worthless in an N-tiered system, outside of the presentation layer.

        There is a really good reason that so few complex Web applications get written in PHP. Sure, there may be some high visibility ones, or some that are huge but not complex. But as someone who has tangled with PHP plenty of times, I can tell you, it is just waiting for real competition to arrive, and then it is gone. I have not met many people who are "real programmers" (as opposed to "Web masters" or "graphics designers who also write some code" or "script kiddies" or whatever) who have used PHP and think that it is a very good system.

        I may add, PHP is totally stalled in terms of features and capabilities. It seems to be going nowhere fast, and much of the same lethargy was seen in Perl before its sun set.

        J.Ja

      • #3203271

        PHP Is Doomed

        by jdkarns ·

        In reply to PHP Is Doomed

        What I would like to ask is…Where is ColdFusion in all of this?  I didn’t even hear it mentioned.

      • #3203187

        PHP Is Doomed

        by kovachevg ·

        In reply to PHP Is Doomed

        — Previous —

        1. The cost of the developer – yes, you need someone who really understands threads.

        I agree. And within the next few years, a developer who does not understand threads will be a developer who is not employed. Threads are approaching the "must have" point, thanks to the fact that nearly every up-to-date PC will be multi-core. Even low end PCs are dual core already.

        — END Previous —

        Boy, if only you were right !!! The world is full of developers who don't understand threads, and of those who understand them but do NOT use them. Your prediction is quite apocalyptic, but unfortunately, it will NOT come true, because if it did, 90% of the companies would run out of programmers. I wish every developer had your enthusiasm and dedication, but in reality things are quite different.

        Outsourcing is alive and kicking. Do you really think those Indian kids coming out of college can program effectively with threads? I spoke to Dr. Chand recently – he is an Indian born in the US but completed a second graduate degree in India. He showed me very reliable data on new programmers. There are currently 300 million young Indians who are about to start their careers as programmers. The strategy of the Indian software companies is to keep development prices low and to hire new people every few months. Average turnover for developers is over 30%. They know that once people get some skill, they are out the door looking for more money. Do you think that with this kind of scenario programming teams will bother to use threads? To cultivate the skills you are talking about requires commitment to quality – I don't see it happening.

        In the US, companies look more and more for cheap developers who will come for a project or two and then leave. Yes, these could be consultants, but just the very mention of the word materializes a hefty price tag. Then you need someone cheap for application support. Is that the guy who programmed with threads for 5 years?

        Bottom line is, there are many jobs in IT that do NOT require the skills, the lack of which you claim will put people out of a job.

        In one of your responses you were discussing other languages, like Ruby on Rails, that have all the perks that PHP has, but then you said "Except for …" – you see what I am saying? The value of a particular technology is vested not only in what it can achieve by itself but also in how well it integrates with other technologies. I outlined most of the important strengths of PHP in that respect, and so I don't want to repeat myself.

        I see that you have the zeal not only to achieve but also to promote excellence. It is a lofty endeavor to be sure. But remember that there are always multiple points of view to be considered, and that the most logical solution reconciles them without heavily favoring any one of them. Have a look at methods to estimate TCO (Total Cost of Ownership). You will see that there is a lot more to consider than just the engineering aspect.

      • #3203095

        PHP Is Doomed

        by sjsubbi ·

        In reply to PHP Is Doomed

        Before reaching any conclusions, we must first think about how PHP became popular in the first place. Before PHP, Perl was the only easy alternative to ASP and J2EE (the giants at that time) and thus gained a good following in a short time. But Perl still had its own disadvantages regarding memory management and platform interoperability (it was hard to integrate with Apache or IIS). So it was not out-of-the-box.

        PHP is in all ways Perl-inspired, except that it made many things easier and more shorthand than Perl. So many web developers did shift to PHP because of the almost zero learning curve. Built ground-up for only web programming, there is no need for multi-threading API. The PHP compiler does that, and so does Apache. Multi-threading API is something required for desktop apps and system programs.

        PHP is popular and will continue to be popular as long as it remains a lightweight solution. Make it bigger and tighter – like models, views, controllers, services (like J2EE) – and it's gonna fade out. I'm now doing a website with PHP, and it took me only 2 days to shift from JSP. All the same, I agree that PHP has its own pits, but we can hope for some speed improvements in forthcoming releases.

      • #3203058

        PHP Is Doomed

        by justin james ·

        In reply to PHP Is Doomed

        Sjsubbi –

        “Built ground-up for only web programming, there is no need for multi-threading API. The PHP compiler does that, and so does Apache. Multi-threading API is something required for desktop apps and system programs.”

        While I agree that for a small, lightweight application PHP is an OK choice, I disagree with your statement quoted above. You are essentially implying a view of Web development where the only thing a dynamic Web page does is slap some miscellaneous elements on the screen, and maybe send an email or run a small database query. While that may be exactly what many Web applications do, including some very high profile Web applications, a real, serious application needs a lot more. For that, multithreading is a must.

        For example, an application that draws data from a third party source, or that performs a lengthy operation but also wants to keep the user informed as to the progress of the operation, needs multithreading. In a nutshell, in a world in which everyone wants to move desktop application functionality online (a world view that I do not agree with, BTW), desktop processing power needs to be moved to the server as well. That means multithreading. You will never write a good graphics editor in PHP, for example, without a third party library. On the other hand, something written in Java, .Net, Ruby, or C/C++ with CGI can do these kinds of complex operations and provide a great user experience at the same time.

        For little apps (of the “it took me 2 days to write it” variety), PHP is fine. But for a large project, you need something that delivers a bit more oomph. PHP simply does not have that level of juice in it. And for most programmers, learning Language A for small applications and Language B for big ones just does not make sense, when Language B can do small projects too, and just as well as Language A.

        J.Ja

      • #3226596

        PHP Is Doomed

        by jaqui ·

        In reply to PHP Is Doomed

        Justin,

        the issue isn't in the multithreading to get the data, it's the multithreading to push the data out after the page has been sent, for a dynamic update of the content. That is where multithreading is being used, and that is where multithreading is breaking accessibility.

        Anyone using a braille terminal will find the dynamically updated page unusable because of the updating.

        They get one line of content at a time displayed; when they move on to later content they lose the earlier content.
        Changing content with dynamic feeds ruins their equipment's ability to get the data, cache it, and display it.
        [ajax anyone? dynamic feeds of external content.]

      • #3226587

        PHP Is Doomed

        by jenueheightz ·

        In reply to PHP Is Doomed

        Did u got payed to say that ? PHP did not proposed threading because the core of php is quite simple and extremely scalable. Ask yourself why there is no J2EE free webhosting ? Because J2EE or ASP simply do not make it. Therefore, threading is not quite a magic function to boost performance because usage have proved that parallelizing or chaining requests have similar gain. Also parallelizing need much effort in software and hardware architecture. Therefore PHP6 may include threading. Do not try to fool us, thanks.

      • #3226532

        PHP Is Doomed

        by justin james ·

        In reply to PHP Is Doomed

        spamneeded –

        “Did u got payed to say that ? PHP did not proposed threading because the core of php is quite simple and extremely scalable. Ask yourself why there is no J2EE free webhosting ? Because J2EE or ASP simply do not make it. Therefore, threading is not quite a magic function to boost performance because usage have proved that parallelizing or chaining requests have similar gain. Also parallelizing need much effort in software and hardware architecture. Therefore PHP6 may include threading. Do not try to fool us, thanks.”

        First, thank you. I think that is the first time anyone on this site has even suggested a lack of integrity on my part, or accused me of playing the FUD or shill game. I will be sure to mark it on my calendar. Who would pay me to knock PHP, especially when I offer not just Java, or .Net, but Java, .Net, and Ruby and Python as examples? What, you think there is some evil cabal of anti-PHP folks out there?

        Furthermore, you bring up the mention of free Web hosts. Let's look at reality here. First of all, no one writing anything on a level above "check out my super kewl Web site and sign my guest book!" is using a free Web host. Second, free Web hosts typically do not come with any kind of dynamic language support. Third, if they do, you will not find any using J2EE, because J2EE servers are typically complex to administer, and Windows/IIS costs money, both of which do not do well in a free environment. Fourth of all, your statement that "parallelizing [sic] or chaining requests have similar gain [sic]" is absolutely ridiculous, not pertinent, and shows a complete lack of understanding. Another term for multithreading is parallel processing. More to the point, no, multithreading does not make any one request or function go faster (in fact, it makes it go slower). However (if you read my post instead of working yourself into a rage you would understand this), it is better to start a thread when requesting data from an external or slow source, assemble the rest of the page in the main thread, and then insert the data from the external source, as opposed to blocking the main thread waiting for that data. Multithreading almost always shows significant performance gains in these scenarios, particularly in instances where the main thread is computation heavy and the child threads are waiting on I/O operations.

        Scalability? PHP is actually not as scalable as you would think; it has a reputation for being a touch slow. In fact, I have never heard of anyone selling a ".Net Accelerator" or a "Java Accelerator", but Zend (as well as other companies) sells a "PHP Accelerator". I cannot think of any other language off hand for which an extra "accelerator" even exists, let alone is marketable.

        And PHP6? And you accuse me of FUD? If you have been around this industry for more than a year or two, you should have learned by now that "the future" according to vendors, OSS developers, and everyone else is pretty much irrelevant, particularly when it comes to feature sets and shipping times. Perl 6 is about 5 years late, by my reckoning. Windows Vista is years late and missing half of its supposed feature set. Yes, PHP 6 may indeed include multithreading capabilities. Again, if you read what I wrote instead of throwing a temper tantrum, you would have read the conditions that I put into my post: "If PHP does not catch up soon, I believe that it is doomed." It has a chance. It has to get moving, and fast, but it does have a chance to save itself. Perl did not disappear overnight, and there is still plenty of it floating around. People are still wishing that COBOL would finally die. PHP can survive, if it adapts and changes to meet the current and future needs of developers.

        J.Ja

      • #3226529

        PHP Is Doomed

        by justin james ·

        In reply to PHP Is Doomed

        Jaqui –

        Don’t worry, I am almost as much against AJAX and those other crackpot ideas as you are. 🙂

        J.Ja

      • #3226425

        PHP Is Doomed

        by mark miller ·

        In reply to PHP Is Doomed

        I haven't programmed in PHP, but I looked at some sample PHP code recently and I didn't like what I saw. It looks too much like *nix shell script. I haven't used Perl, but I'll take Justin's word for it that they're similar. All the same I don't see why multithreading could not be added to PHP. A thought I had is maybe it's being held back by some older platforms it supports. Are there versions for DOS, Windows 3.x, or 16-bit OSes? If not, there's really nothing holding it back except for the possibility that its users haven't seen a need for multithreading yet. I assume all of the platforms it supports support multithreading natively. So all that would really be necessary is to add the language syntax/semantics that support it to PHP.

        I’ve taken a look at Ruby, and I like what I see, with a caveat. Ruby, I’ve discovered, is similar to an old favorite language of mine, Smalltalk. As Justin has pointed out it’s a dynamically typed language, and it’s object-oriented. Back when I was getting my CS degree, the consensus seemed to be that part of the definition of an object-oriented language was it should be dynamically typed. Languages like C++ were the exception, not the rule. Of course there were always arguments going on about what truly defines an OO language. Back then it was common for Lisp to be referred to as an OO language. When I look back on that I just laugh. With the exception of CLOS (Common Lisp Object System), I don’t see how people could come to that conclusion now. But I digress.

        I’m going to get out ahead of Justin here, I’m sure. I was recently reviewing some old Smalltalk code I wrote, and this caveat came to mind: In a dynamically typed OO language, it’s imperative that you document your instance variables. You can either do this by naming your instance variable according to the type(s) that are assigned to it, or put a detailed comment with it. The reason being that it’s very easy to forget what type was assigned to a variable. In a dynamically typed language, there are no type designators in variable declarations. There are only variable names. However, once a variable is assigned an object of any type, any references to that variable must conform to the rules of that object’s type. If you do not document this in a way that is clear to you, you can get yourself in a real bind, especially if the program is large and complex. As I was reviewing my Smalltalk code I noticed myself struggling a bit to figure out what types the instance variables in my classes had, because I didn’t document them. This was important, because it provided a context for the code that used those variables.

        One might ask why a language should be dynamically typed at all if it creates problems like this. The reason that dynamic typing can be nice is it enables code that is expressive. You can express your intent, rather than focusing on how to implement your intentions. The problem is that sometimes explicitly knowing how your intent is implemented is a good thing. Just from my short exercise in reviewing some old code, I found this was particularly the case with instance variables.

        An example I’ve given in the past is in Ruby it’s possible to say the following:

        today - 2.weeks

        to get a date from 2 weeks ago. I think this illustrates well what I'm talking about. One clearly gets the sense of what I'm trying to do, almost at first glance. One can almost imagine writing this down on a piece of paper, showing this to somebody who knows nothing about programming, and have them understand what it means.

        In C# I would have to say this:

        DateTime.Today - TimeSpan.FromDays(14.0);

        In order to compute the date from 2 weeks ago, I have to show the types that will a) give me today’s date, and b) give me a time span of 2 weeks. I’m using the .Net 1.1 API here. TimeSpan doesn’t have a method FromWeeks().

        From what I’ve read, Microsoft is planning on adding capabilities similar to what Ruby has to .Net in the Orcas timeframe. You can see the progress they’re making in the Linq project. They’re adding dynamic typing, lambda expressions (basically anonymous method delegates with parameters), and “extension methods” to C# and VB.Net. Extension methods will provide something similar to what we now see in Ruby. Methods can be “tacked on” to all classes that are used within a certain scope. So one can imagine a call like:

        DateTime.Today - 2 weeks;

        with "weeks" being an extension method for the integer object "2", which produces a TimeSpan object. Note I did not use a period ('.') in the call to "weeks", and that I did not use parentheses. It looks like with extension methods these syntactic elements are not necessary. Microsoft is getting the importance of expressiveness, particularly when it comes to extracting data from different sources. Programmers are complaining that it takes too much code to do this now. In the case of Linq, relief can't come soon enough. I would've loved to have had it a year ago!
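
        For what it's worth, here is a small VB.Net sketch of how such an extension method could look, using the attribute-based syntax from the Orcas previews (just my own illustration; the Weeks name is hypothetical and the exact shipping syntax may differ):

        Imports System.Runtime.CompilerServices

        Module TimeExtensions
          ' Lets an Integer such as 2 be asked for a TimeSpan covering that many weeks.
          <Extension()> _
          Public Function Weeks(ByVal count As Integer) As TimeSpan
            Return TimeSpan.FromDays(count * 7.0)
          End Function
        End Module

        ' Usage, once the module is in scope:
        '   Dim twoWeeksAgo As Date = Date.Today - 2.Weeks()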

      • #3204501

        PHP Is Doomed

        by fulgerthebest ·

        In reply to PHP Is Doomed

        All this posts amounts to is wishful thinking. “Gee, wouldn’t it be great if we could kick those Freedom-of-technology bastards off the Internet forever? If only PHP would go away, that would leave the only two server-side languages proprietary!” But PHP came along to address a need, and the need is still there. Perl still draws breath as well; you should really drop by Netcraft some time in your life before blowing off your big bazoo about what the web’s doing.

      • #3204488

        PHP Is Doomed

        by justin james ·

        In reply to PHP Is Doomed

        bugmenot –

        “All this posts amounts to is wishful thinking. “Gee, wouldn’t it be great if we could kick those Freedom-of-technology bastards off the Internet forever? If only PHP would go away, that would leave the only two server-side languages proprietary!” But PHP came along to address a need, and the need is still there. Perl still draws breath as well; you should really drop by Netcraft some time in your life before blowing off your big bazoo about what the web’s doing.”

        Not true in the slightest. I said that PHP is falling woefully behind, and that a lack of multithreading support is going to kill it. Furthermore, I am certainly not someone who hates those "Freedom-of-technology" people. I offered Ruby as a logical successor to PHP, after all.

        Will PHP live on? Sure. Just probably not as one of the Big 3. Does Perl live on? Sure, it is still #4. But its percentage of new development is miniscule. PHP is headed there as well, unless it starts to change.

        J.Ja

      • #3205419

        PHP Is Doomed

        by seefags ·

        In reply to PHP Is Doomed

        First, Perl isn’t dead. Second, all the scripting languages are equally as good as eachother. To say anything else is just naivete or an attempt to start a religious war. We are talking about business apps here. Javscript and the web are allowing creativity to flourish by letting developers design their own widgets and more intuitive user interfaces. There will be no “winner”. For the next 2 years, it will be .NET and Java and scripting languages. In the end, it will be and just be .Net. and we will live on planet Microsoft.

         

         

      • #3205412

        PHP Is Doomed

        by tony hopkinson ·

        In reply to PHP Is Doomed

        That’s two now Mr McCarthy, you are getting a rep
        LOL

      • #3205288

        PHP Is Doomed

        by justin james ·

        In reply to PHP Is Doomed

        “First, Perl isn’t dead. Second, all the scripting languages are equally as good as eachother. To say anything else is just naivete or an attempt to start a religious war. We are talking about business apps here. Javscript and the web are allowing creativity to flourish by letting developers design their own widgets and more intuitive user interfaces. There will be no “winner”. For the next 2 years, it will be .NET and Java and scripting languages. In the end, it will be and just be .Net. and we will live on planet Microsoft.”

        Yes, Perl is quite dead (or at the very least, gasping for breath with a gut shot) as far as Web development goes. Not many people are performing new development in it. To say that all scripting languages are created equal shows a definite lack of understanding. JavaScript is pretty crippled as a language, as an example. PHP lacks multithreading, while Perl, Ruby, and a host of others do support it. Lisp, Scheme, and various ML derivatives can do things that other scripting languages only dream of. And so on. Is it a "religious war"? Sure.

        JavaScript and the other languages fulfill wholly different roles. JavaScript (with the exception of the three people who took up Microsoft’s challenge to use it in ASP) is client-side only (indeed, with the exception of Flash, it is the only client-side option you can count on being installed and turned on the bulk of the time), while Perl, PHP, Ruby, et al. are server-side technologies. There is a world of difference there. I really do not care what widgets people make with JavaScript, because without something server-side to feed those widgets, all they are is hopelessly obnoxious UI gadgets that add little real value.

        J.Ja

      • #3205191

        PHP Is Doomed

        by gdeckler ·

        In reply to PHP Is Doomed

        Look, this article is clearly ridiculous in that, first, the author does not understand PHP very well and is looking at it through the eyes of a traditional application server. Second, the logical exercise of “PHP is the odd man out, so it is going to die” is also demonstrably off-base.

        Second one first. If you have three competitors and two are architected nearly identically and the third is not, then it is LIKELY that one of the two similar products will die off because those two compete more directly.

        Now, the first one. PHP is “multi-threaded” simply through its architecture. Multi-threading is important in the Java and .NET worlds because you have a few big application servers sitting behind the scenes doing everything. PHP’s architecture is not like that. In PHP, you toss lots of cheap application servers up front and load balance across them. This means that you are spending about $1,000 per processor for a PHP system and about $10,000 per processor for a Java or .NET system. Java and .NET are not 10 times faster or more efficient than PHP. Maybe you could argue two or three times, but I doubt even that.

        Finally, not everyone needs websites that will handle 10 million simultaneous users. It’s a matter of scale versus complexity and PHP is dirt simple compared with .NET and Java. The barrier to entry is effectively zero.

        My money is that Java is dead. .NET has some serious advantages over Java, not to mention that it brings a huge community of Windows programmers to the web. In addition, with Sun introducing JEE5 and essentially killing J2EE, many people will never make the transition, which means that J2EE will die slowly, JEE5 will never gain momentum, and .NET will crush it like a bug.

      • #3205115

        PHP Is Doomed

        by callred ·

        In reply to PHP Is Doomed

        I don’t think lack of multithreading support alone will kill PHP; that seems a little far-fetched to me. I think PHP may, and probably will, be replaced at some point, but there will have to be several really significant factors for that to happen (such as better libraries, faster development, etc.); people aren’t all going to start simultaneously writing multithreaded apps and create this huge demand for a multithreaded language just because dual-core processors are available, any more than they will all immediately write 64-bit apps because those processors are available. Hence, there will not be any immediate push amongst developers for multithread support or bust, and PHP is safe in this respect.

        I would actually be more likely to believe that PHP is going to be uprooted from an ease-of-development standpoint, which it sounds like Ruby is doing quite nicely already. In all likelihood, PHP is going to be knocked off not by Java or .NET, but by another open source scripting language (which it sounds like you agree with), just like PHP knocked off Perl before it.

        I think you’re right that PHP will die, maybe even soon, but your reasoning is slightly unrealistic in my opinion.

      • #3203646

        PHP Is Doomed

        by madestroitsolutions ·

        In reply to PHP Is Doomed

        Hi guys,

        I didn’t have time to read all the comments in this blog, but here are my comments:

        In my opinion, you can argue anything you want, but the fact remains, we live in a multithreaded world. If PHP does not support it, then it IS doomed. It may not die, but it will certainly cease to be one of the top players. Threading is a reality of modern computing and you will need it at some point. In fact, even suggesting that you don’t need threads reflects your low degree of experience and knowledge about the subject (no offense to anyone).

        I agree with another user up there: scripting is SERIOUSLY overused nowadays. After all, AJAX and other techniques are nothing but workarounds for our stateless web environment. In my opinion, big companies should be spending time working on a real solution rather than building patches for this now obsolete architecture. Granted, that was not the original intent of web browsers, but given their popularity and use, shouldn’t we do something about it?

      • #3203436

        PHP Is Doomed

        by lphuberdeau ·

        In reply to PHP Is Doomed

        This article does demonstrate a lack of understanding of the PHP architecture. Especially since the main argument is wrong. PHP’s request handling is done via Apache, so no threading is required to produce web services. Apache will use the available CPUs without a problem.

        On the other hand, for consuming web services, PHP can use parallel request handling. Surprise! PHP has a huge library of extensions. It happens to contain one library to do this. You should do better research next time.

        http://pecl.php.net/package/pecl_http/download/1.3.0/

        As for the documentation argument, I think most PHP developers would not agree with you. One of the primary reasons for PHP’s success is that it has excellent and easy-to-use documentation. Some extensions are not covered in full detail, but most of them are.

      • #3203404

        PHP Is Doomed

        by justin james ·

        In reply to PHP Is Doomed

        “This article does demonstrate a lack of understanding of the PHP architecture. Especially since the main argument is wrong. PHP’s request handling is done via Apache, so no threading is required to produce web services. Apache will use the available CPUs without a problem.”

        I never said anything about producing Web services. I discussed parallel processing in general. It is amazing to me how many commenters seem not to know the difference between Apache using multiple threads to process requests for PHP scripts, and using threading within your script. Absolutely astounding, really. The fact that Apache runs each request in a separate thread has 1) nothing to do with PHP and everything to do with Apache and 2) absolutely no benefit if what you need is to perform parallel processing.
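        To illustrate what I mean in .Net terms (a hypothetical sketch with made-up names; it is the concept that matters, not the code): the server already gives each request its own thread, but only the second routine below actually splits one request’s work across threads.

        Public Class RequestWorkSketch
          ' Hypothetical illustration only; ProcessChunk stands in for real work.
          Private Shared WorkDoneEvent As New Threading.ManualResetEvent(False)

          ' The Web server already runs this on its own thread (one thread per request).
          ' That is server-level concurrency; the body itself is still sequential.
          Public Shared Sub HandleRequestSequentially()
            ProcessChunk(1)
            ProcessChunk(2)
          End Sub

          ' In-script parallelism: the request fans its own work out to a pool thread
          ' while continuing to work on the request thread at the same time.
          Public Shared Sub HandleRequestInParallel()
            WorkDoneEvent.Reset()
            Threading.ThreadPool.QueueUserWorkItem(AddressOf ProcessChunkAndSignal, 1)
            ProcessChunk(2)
            WorkDoneEvent.WaitOne()
          End Sub

          Private Shared Sub ProcessChunkAndSignal(ByVal StateObject As Object)
            ProcessChunk(CInt(StateObject))
            WorkDoneEvent.Set()
          End Sub

          Private Shared Sub ProcessChunk(ByVal ChunkNumber As Integer)
            Threading.Thread.Sleep(100) ' placeholder for real computation
          End Sub
        End Class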

        “On the other hand, for consuming web services, PHP can use parallel request handling. Surprise! PHP has a huge library of extensions. It happens to contain one library to do this. You should do better research next time.”

        You know what? If going to the PHP Web site and searching for “thread,” “mutex,” “atomic,” and “semaphore” (all common terms within the multithreading world) generates zero useful information, then as far as I am concerned, it does not exist. I did research. I literally spent hours trying to find information on this topic, and could not.

        And the extension you linked to? Yes, while it is useful for performing the download of data via HTTP to consume a Web request (one example that I had used, so you have addressed that), it still does nothing for parallel computation. Furthermore, the “parallel requests” that you mention seem to execute the requests at the same time, but the rest of the code still blocks! The best you can do is make a new class, inheriting from HttpRequestPool, and override its socketPerform() and socketSelect() functions to have them run code while the request is processing; I hardly consider this elegant. At best, to have any kind of true parallel processing, you would need to split the main script’s code out into multiple files (or have a switch in the GET/POST data) and perform a request via the HttpRequestPool to each of those separate scripts (or to the main script with a GET/POST switch somewhere) to simulate parallel processing.

        Even then, you still do not have true multithreading! Where are your shared variables so that you can pass data around? What, you’re going to dump everything into session()? session() is global to that session, hardly a good way to maintain concurrency.

        And let’s not overlook the fact that this technique involves having Apache spool up an *entire HTTP request and instance of PHP, and interpret an entire new script* simply to do some multithreading. That is entirely ridiculous.

        And it is hardly portable, since you need to have all of these separate files for each “function.”

        I could go on and on, but the claim that this package refutes my point that PHP cannot perform multithreading is entirely untrue. Yes, I may not have found this package in my research for this article (you can blame PHP’s lousy documentation and search system for that one), but this package hardly does anything that I would consider “multithreading.” Sorry, but you are way off base here.

        J.Ja

      • #3203988

        PHP Is Doomed

        by lphuberdeau ·

        In reply to PHP Is Doomed

        Does the web really need complete parallel processing? Is the additional complexity really worth it?

        Seriously, very large applications run on PHP. Applications that get millions of hits a day (think Yahoo!, Flickr). The applications are fast and highly personalized. They don’t use parallel processing. They don’t need it. It all depends on how you plan your whole architecture. If you want to rely on web services to pull out every single piece of information you display, PHP is obviously not for you. If you try using PHP the same way you use Java or .NET, you will fail. The same thing happens if you try to use Java the same way you use PHP. They are two completely different technologies.

        Why would you search for mutex anyway? A mutex is a solution to a problem. PHP has no mutex extension because it does not need any.

        Wait… didn’t the title of the first page contain the word Semaphore? It’s even the first link I get when I search for it in the search box. The link is straight in the table of contents. Even google gives some results. Yeah, it’s completely impossible to find information about semaphores in PHP, you’re right.

      • #3203876

        PHP Is Doomed

        by justin james ·

        In reply to PHP Is Doomed

        lphuberdeau –

        You are correct, I was able to find information on semaphores in the PHP documentation… only after changing the search parameters to include “all documentation” and not just the function list, which is the default. I stand corrected.

        However, I believe you did not fully read the semaphore information; it is for working with System V processes. As the documentation I quote in the post itself shows, PHP running in a Web server should not be using forks and other System V process tools! So you really have not shown anything here; PHP is still incapable of running MT in a Web server.

        “Seriously, very large applications run on PHP. Applications that get millions of hits a day (think Yahoo!, Flickr). The applications are fast and highly personalized. They don’t use parallel processing. They don’t need it. It all depends on how you plan your whole architecture. If you want to rely on web services to pull out every single piece of information you display, PHP is obviously not for you. If you try using PHP the same way you use Java or .NET, you will fail. The same thing happens if you try to use Java the same way you use PHP. They are two completely different technologies.”

        You will note that I never said a thing about PHP’s speed or scalability, because I am well aware that PHP is used on some pretty large sites. I also did not say that every site out there requires parallel processing. But if I have to choose between two languages, one with MT support and the other without, all else being equal I will choose the language with MT capabilities. And yes, you really can use Java and .Net for the same types of sites (lightweight pages with relatively little business logic involved) and get results just as good as you can with PHP. Personally, I think Java and .Net both have their problems. Java is a monolithic pig and .Net is Windows/IIS only for all intents and purposes (not to mention the issues with anything provided by Microsoft).

        That being said, PHP offers absolutely nothing unique. More importantly, while PHP in and of itself does not provide any features that cannot be found in other frameworks, other frameworks, including the upstart Ruby on Rails, have a ton of features that PHP does not, without being significantly more complex or difficult to use. Pound for pound, the reason why PHP has such a large install base and usage rate has more to do with business reasons than technical ones. It is open source, can be easily edited with a plain text editor, is easier to learn and work with than its predecessor (Perl), is easy to install and configure, and so on.

        It is not like I am saying that PHP is without its merits. It has merits. It is simply that none of them are technical merits. The same can be said of JavaScript. JavaScript is a dog of a language, but it has a huge install base, and as such it gets used. PHP lacks the high-end features needed for big-time development work. For example, the way it limits memory yet does not throw errors when you have exceeded the maximum memory usage in a process; for those of us who work with multiple MB, and even GB, worth of data at a time, PHP does not cut it. As the world moves to replace traditional desktop applications with Web-based equivalents, those big-time features are going to be needed. You may not see the need for MT today, but what if someone asks you to write a Web-based image editor (just an example)? MT is incredibly useful in image editing.

        Personally, I disagree with the drive to replace desktop apps with Web-based versions of them. But as that is the direction things are moving in, MT is important. More to the point, what is your resistance to bringing PHP into the world of modern languages? There are tons of language features out there that few people use, but their existence is a make-or-break item for some projects. For example, Perl has a lot of functional programming-esque features in it that rarely get used. But when they do get used, they slash code size and execution time dramatically (try comparing the code needed to write the Knuth soundex algorithm in the Perl CPAN module to the code needed to do it in any procedural language).

        “Why would you search for mutex anyway? A mutex is a solution to a problem. PHP has no mutex extension because it does not need any.”

        If you do not understand why someone would want to use or need a mutex, then you may want to review multithreading a bit more. Mutexes do indeed have their place.

        It seems to me that you have a deep emotional investment in PHP. That’s fine, and there is nothing wrong with that. But to hold PHP up as an unassailable totem of perfection when it really is not is foolish at best. I stand by my analysis: PHP’s lack of MT capabilities will squeeze it from one side, while a better, more modern OSS language (quite possibly Ruby on Rails) squeezes it from the other.

        J.Ja

      • #3203737

        PHP Is Doomed

        by dmuth ·

        In reply to PHP Is Doomed

        You make it sound as though multi-threading is the “one true way” to write programs, and that just isn’t so. Having done multi-threaded programming in years past, I can say that it is a great way to confuse the programmer and make for programs that are VERY hard to debug. I’ve been writing in PHP for 8 years, and I’m doing just fine without multi-threading, thanks.

        As for the documentation, I think that the comments on the functions are a great feature of PHP. Sometimes a particular function has quirks, or is not used in a way that the developers intended. Rather than having strictly static documentation on the website and forcing the users to look elsewhere when they have problems, users can share their experiences right on the function’s page, which I have found invaluable. Also, PHP is the only language I have used to date where I can go to their official website, type “http://www.php.net/functionname”, and get taken straight to the page of the function.

        To be perfectly honest with you, I think your article is rather inflammatory in its tone and need not be. If you really want to prove how awesome these other technologies are, why not take some time and tell us how cool they are, rather than bashing PHP?

        — Doug

      • #3203706

        PHP Is Doomed

        by callred ·

        In reply to PHP Is Doomed

        Just to play devil’s advocate…

        Well, a lot of the PHP usage is also on personal sites/non-profits, lest we forget. I don’t think I am going to use J2EE on my www hosting account that came with my home internet access anytime soon (a J2EE environment for free? I don’t think so). And since my ISP is using Linux on this box (I don’t fault them), I doubt they’re going to put Mono on it for ASP.NET. At least in this respect, I see PHP living on for a while, as a lot of the service providers I know are not keen on switching to new technology quickly, even if it is becoming popular – such as Rails.

        Furthermore, if this discussion is any realistic sampling of PHP developers, at least half or more sound as if they don’t care about, or don’t see a need for, threading in PHP. Regardless of whether they are right or wrong, if PHP is going to die based on developer threading demand, in your view, shouldn’t there have been at least a few more people agreeing with you?

        I think your views are great most of the time, and I am not taking sides with the non-threading people (I like to look at both sides of an argument in depth). However, just because you have such high standards for computing technologies and programming platforms does not, unfortunately, mean that real-world IT departments and project leaders see things the same way. Of course, we have no way to actually measure which programming teams “get it” and which ones don’t. But from the sound of things, it seems like group B.

      • #3140954

        PHP Is Doomed

        by mindilator9 ·

        In reply to PHP Is Doomed

        I’m gonna sit on the fence on this one. callred has a very valid point: if MT were so direly necessary for PHP’s survival, there would be more demand for it from its own developers, and not from inexperienced PHP developers who look down their noses at scripting languages. PHP has a history of adding those things that are crucially needed to compete, albeit clumsily. For example, the premature addition of OOP to version 4, magic quotes, etc. Yes, OOP in version 4 is atrocious, but without it version 5 wouldn’t have the major improvements that it needed. I’ve read articles all over the place speculating about PHP 5’s takeover of .NET. They are mere speculation, of course, as is this diatribe on PHP and MT. I would love to hear J.Ja’s explanation for Java’s adoption of PHP: http://news.com.com/Andreessen+PHP+succeeding+where+Java+isnt/2100-1012_3-5903187.html.
        The fact is, PHP is a survivor because it adapts. I wholeheartedly agree with J.Ja that MT features should be added. I can see them being immensely useful to the GTK-2 developers who are using PHP to create desktop apps, and of course for many types of applications that I cannot imagine. I do not see the issue as being do-or-die for PHP, though. J.Ja would have you believe that every application, or at least a significant majority of them, will need advanced APIs for multithreading, and that simply is not so. MT is not going to cure the stateless web issue. Languages like RoR are fantastic, as far as I’ve heard, up until the point that you want to do something unconventional, at which point you need to do a ton of tedious coding. I see absolutely no difference in this regard from PHP. PHP is fantastic until you want to do something that’s never been done with it before, such as multithreading, or making a desktop app. Then, if you’re a PHP developer, you do one of two things: wait for the support to be developed, or develop it yourself.
        J.Ja, if you’re such a PHP prophet, why don’t you single-handedly “save” PHP yourself by adding superior MT support to it? Personally, I would love to see .NET go the way of the dodo, but at least I have the ability to acknowledge my own bias, and not throw it out as some doomsday rhetoric to be taken seriously by others. PHP needed better OOP; it got it. PHP needed better database abstraction; it got it. PHP needed a reliable framework; it got many. And all these features still need improving. They always will. So will any version of C, Java, Python, etc. Does PHP need multithreading? I’m sure it does. It will get it, if not in the libraries then in the core engine.
        To me, your article basically comes down to saying a tomato (PHP) is not a fruit (Java, .NET, RoR). Technically it is, but it is just much different from what you are used to, and that is why you condemn it. The PHP community cares enough about PHP to keep it alive however necessary, and you fail to take that into account when you dismiss someone’s “emotional investment” in the language. Your language puts food on your table; I don’t believe for one second that you don’t have an emotional investment in your paradigm of choice.

      • #3140842

        PHP Is Doomed

        by lphuberdeau ·

        In reply to PHP Is Doomed

        I’m glad to finally see a tone change in your responses.

        I am aware that all the threading-related solutions I pointed to are rudimentary. I would not be surprised to hear that the only ones who ever used them are those who contributed to them. I never used them; I just know they exist for that one time I might need them. They might not be mature, but they solved a problem in a good-enough way for someone in the past. Alone, that is a good enough argument to say that so far, threading is not a real need in the PHP world.

        PHP might not have anything special as a language, but it does with the community. No one is trying to mimic predecessors. There is no such thing as a single solution. Design evolves over time, and implementations are based on actual user needs. Even if some complain that the API is inconsistent, it’s probably the simplest and most comprehensive API around. Have you worked with PHP 5.1? With PDO? The XML extensions? I have used XML libraries in C++ and Java. Honestly, PHP’s extensions beat them all single-handedly. Are the features unique? Of course they are not, but the implementation is. C# is one of those by-the-book languages made to fit well with UML tools, which were meant to match by-the-book object languages like Java. Unlike those languages, PHP is not trying to be generatable by CASE tools. It’s not trying to be multi-purpose, although it’s good enough in many situations with the CLI interface.

        As a language, PHP is easily extensible using C. All those examples you gave about the need for multi-threading are related to performance. If performance is so important, the recommended approach in PHP would be to write a C extension. From there, you have all those features you need for threading, with your mutexes and all.

        As for RoR, it does have a few interesting concepts, but it’s not close to being a mature technology yet. No one knows how well it can scale, how good the security of the framework is, or if it will be gone with the 2.0 fad. It might end up being a good platform, but it will take more time. So far, it’s all brand new. They have a small user base and can make changes quickly to their language and framework. It will be interesting to see how they handle things when it reaches a larger user base.

        For your information, I know what a mutex is for. My point was that you don’t need a mutex unless you have concurrency. Since everything in PHP is mostly sandboxed, which is one of its strengths by the way, you don’t need a mutex. The APC user cache allows you to share information across requests. It handles the concurrency, so you don’t have to worry about it. Is it the most efficient way to do it? It’s probably not very smart for large amounts of data, but for small pieces, a mutex wouldn’t do any better.

        I hope that clarifies a few details.

      • #3139947

        PHP Is Doomed

        by jenueheightz ·

        In reply to PHP Is Doomed

            PHP is the only language I have used to date where I can go to their official website, type “http://www.php.net/functionname”, and get taken straight to the page of the function.

        MySQL also has a great site with a similar user experience. Beyond the fact that the functions are open to user comments, you may type this URL: http://www.mysql.com/functionname to go to the documentation in your own language!

        Furthermore, the PHP and MySQL documentation is fully translated into other languages. Is that the case for Java?

    • #3226420

      Multithreading Tutorial – Part V: Mutex Performance

      by justin james ·

      In reply to Critical Thinking

      This is the fifth installment of a multi-part series demonstrating multithreading techniques and performance characteristics in VB.Net.

      While the last post used SyncLock to mark a “critical section” of code to maintain concurrency, we are now going to take a look at the Mutex class. A word of warning: as the test results show, Mutex is an extremely slow system! You want to be very careful when using Mutex. According to MSDN Magazine (September 2006), Mutex can take 9801.6 CPU cycles to acquire a lock on a CPU without contention. In comparison to 112 CPU cycles for a Win32 CRITICAL_SECTION (which SyncLock uses), it is easy to see why the decision to use Mutex must be carefully weighed.
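      To put the difference in concrete terms, here is a minimal sketch (not part of the benchmark itself, and the class and member names are made up) showing the same increment guarded first by SyncLock and then by a Mutex. An uncontended SyncLock stays in cheap user-mode locking, while Mutex.WaitOne acquires a kernel synchronization object every time, which is largely where the extra cycles go; in exchange, a Mutex can be named and shared across processes, which SyncLock cannot do.

      Public Class CounterSketch
        ' Illustration only: the same shared counter guarded two different ways.
        Private Shared CounterLockObject As New Object
        Private Shared CounterMutex As New Threading.Mutex
        Private Shared IntegerCounter As Integer = 0

        ' SyncLock compiles down to Monitor.Enter/Monitor.Exit on CounterLockObject.
        Public Shared Sub IncrementWithSyncLock()
          SyncLock CounterLockObject
            IntegerCounter += 1
          End SyncLock
        End Sub

        ' The Mutex goes through a kernel object on every acquire and release.
        Public Shared Sub IncrementWithMutex()
          CounterMutex.WaitOne()
          Try
            IntegerCounter += 1
          Finally
            CounterMutex.ReleaseMutex()
          End Try
        End Sub
      End Class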

      Here is the code used for this test:

      Public Sub MutexMultiThreadComputation(ByVal Iterations As Integer, Optional ByVal ThreadCount As Integer = 0)
        Dim twMutexLock As MutexThreadWorker
        Dim IntegerIterationCounter As Integer
        Dim iOriginalMaxThreads As Integer
        Dim iOriginalMinThreads As Integer
        Dim iOriginalMaxIOThreads As Integer
        Dim iOriginalMinIOThreads As Integer

        twMutexLock = New MutexThreadWorker

        Threading.ThreadPool.GetMaxThreads(iOriginalMaxThreads, iOriginalMaxIOThreads)
        Threading.ThreadPool.GetMinThreads(iOriginalMinThreads, iOriginalMinIOThreads)

        If ThreadCount > 0 Then
          Threading.ThreadPool.SetMaxThreads(ThreadCount, ThreadCount)
          Threading.ThreadPool.SetMinThreads(ThreadCount, ThreadCount)
        End If

        For IntegerIterationCounter = 1 To Iterations
          Threading.ThreadPool.QueueUserWorkItem(AddressOf twMutexLock.ThreadProc, Double.Parse(IntegerIterationCounter))
        Next

        ' Busy-wait (spin) until every queued work item reports completion.
        While MutexThreadWorker.IntegerCompletedComputations < Iterations

        End While

        Threading.ThreadPool.SetMaxThreads(iOriginalMaxThreads, iOriginalMaxIOThreads)
        Threading.ThreadPool.SetMinThreads(iOriginalMinThreads, iOriginalMinIOThreads)

        twMutexLock = Nothing
        IntegerIterationCounter = Nothing
      End Sub

      And the MutexThreadWorker class:

      Public Class MutexThreadWorker
        Public Shared MutexStorageLock As New Threading.Mutex
        Public Shared MutexCompletedComputationsLock As New Threading.Mutex
        Public Shared IntegerCompletedComputations As Integer = 0
        Private Shared DoubleStorage As Double

        Public Property Storage() As Double
          Get
            ' Copy the value out before releasing the Mutex; returning before the
            ' release would leave the Mutex permanently held.
            Dim DoubleCurrentValue As Double
            MutexStorageLock.WaitOne()
            DoubleCurrentValue = DoubleStorage
            MutexStorageLock.ReleaseMutex()
            Return DoubleCurrentValue
          End Get
          Set(ByVal value As Double)
            MutexStorageLock.WaitOne()
            DoubleStorage = value
            MutexStorageLock.ReleaseMutex()
          End Set
        End Property

        Public Property CompletedComputations() As Integer
          Get
            Return IntegerCompletedComputations
          End Get
          Set(ByVal value As Integer)
            IntegerCompletedComputations = value
          End Set
        End Property

        Public Sub ThreadProc(ByVal StateObject As Object)
          Dim ttuComputation As ThreadTestUtilities

          ttuComputation = New ThreadTestUtilities

          Storage = ttuComputation.Compute(CDbl(StateObject))

          ' Acquire the Mutex, update the shared completion counter, then release.
          MutexCompletedComputationsLock.WaitOne()
          CompletedComputations += 1
          MutexCompletedComputationsLock.ReleaseMutex()

          ttuComputation = Nothing
        End Sub

        Public Sub New()

        End Sub
      End Class
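
      For reference, here is a minimal sketch of how a single pass might be driven and timed. This is a hypothetical driver (it assumes MutexMultiThreadComputation is reachable from a module), not the harness that produced the numbers below:

      Public Module MutexTestDriver
        ' Hypothetical driver, for illustration only.
        Public Sub RunOnePass()
          Dim swTimer As New System.Diagnostics.Stopwatch

          ' Reset the shared completion counter so the busy-wait works for this pass.
          MutexThreadWorker.IntegerCompletedComputations = 0

          swTimer.Start()
          MutexMultiThreadComputation(100000) ' let the ThreadPool manage thread counts
          swTimer.Stop()

          Console.WriteLine("Elapsed: " & swTimer.ElapsedMilliseconds & " ms")
        End Sub
      End Module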

      Here are the results of our tests. All tests are for 100,000 iterations, and the results are in milliseconds per test run. This is a significant departure from previous posts in this series, where tests were performed with 1,000,000 iterations.

      TEST 1

      This test allows the ThreadPool to manage the total number of minimum and maximum threads on its own:

       

      System     Test 1     Test 2     Test 3     Test 4     Test 5     Average
      System A   953.125    546.875    625.000    562.500    656.250    668.750
      System B   733.886    765.115    624.584    702.657    687.042    702.657
      System C   671.862    796.859    749.985    718.736    765.610    740.610
      System D   2972.759   2925.820   2941.466   2910.174   2957.112   2941.466

      Overall average: 1263.371

      TEST 2

      In this test, we limit the maximum number of threads to one per logical processor:

       

      System     Test 1     Test 2     Test 3     Test 4     Test 5     Average
      System A   578.125    562.500    640.625    500.000    515.625    559.375
      System B   624.854    780.730    687.042    640.198    702.657    687.096
      System C   687.486    703.111    718.736    781.235    703.111    718.736
      System D   2894.528   2941.466   2861.143   2954.561   2985.826   2927.505

      Overall average: 1223.178

      TEST 3

      This test uses only one thread:

       

      System     Test 1     Test 2     Test 3     Test 4     Test 5     Average
      System A   578.125    562.500    609.375    562.500    640.625    590.625
      System B   640.198    780.730    655.813    702.657    765.115    708.903
      System C   796.859    749.985    781.235    765.610    734.360    765.610
      System D   2892.031   3001.459   2876.398   3001.459   3985.826   3151.435

      Overall average: 1304.143

      TEST 4

      This test uses two concurrent threads:

       

      System     Test 1     Test 2     Test 3     Test 4     Test 5     Average
      System A   703.125    500.000    640.625    578.125    671.875    618.750
      System B   733.886    749.500    671.427    718.271    640.198    702.656
      System C   859.358    687.486    671.862    703.111    718.736    728.111
      System D   2953.635   2906.752   2891.124   2906.752   2984.890   2928.631

      Overall average: 1244.537

      TEST 5

      Here we show four concurrent threads:

       

      System     Test 1     Test 2     Test 3     Test 4     Test 5     Average
      System A   562.500    609.375    531.250    515.625    546.875    553.125
      System B   765.115    655.813    687.042    718.271    733.886    712.025
      System C   781.235    749.985    828.109    718.736    874.983    790.610
      System D   2954.561   2985.826   2923.296   2907.663   2938.928   2942.055

      Overall average: 1249.454

      TEST 6

      This test uses eight concurrent threads:

       

      System     Test 1     Test 2     Test 3     Test 4     Test 5     Average
      System A   640.625    562.500    609.375    625.000    546.875    596.875
      System B   890.032    671.427    718.271    687.042    640.198    721.394
      System C   703.111    812.484    734.360    765.610    796.859    762.485
      System D   2985.826   2970.194   3000.518   2875.496   2969.263   2960.259

      Overall average: 1260.253

      TEST 7

      Finally, this test runs 16 simultaneous threads:

       

      System     Test 1     Test 2     Test 3     Test 4     Test 5     Average
      System A   609.375    562.500    546.875    625.000    578.125    584.375
      System B   749.500    780.730    655.813    671.427    640.198    699.534
      System C   828.109    718.736    749.985    796.859    703.111    759.360
      System D   5438.439   5407.184   5329.045   5157.141   5235.279   5313.418

      Overall average: 1839.172

      System A: AMD Sempron 3200 (1 logical x64 CPU), 1 GB RAM
      System B: AMD Athlon 3200+ (1 logical x64 CPU), 1 GB RAM
      System C: Intel Pentium 4 2.8 GHz (1 logical x86 CPU), 1 GB RAM
      System D: Two Intel Xeon 3.0 GHz (2 dual core, HyperThreaded CPUs providing 8 logical x64 CPUs), 2 GB RAM

      It is extremely important to understand the following information and disclaimers regarding these benchmark figures:

      They are not to be taken as absolute numbers. They are taken on real-world systems with real-world OS installations, not clean benchmark systems. They are not to be used as any concrete measure of relative CPU performance; they simply illustrate the relative performance characteristics of the various multithreading techniques across different numbers of logical CPUs, to show how processors can behave differently depending on the technique.

      Testing revealed some very unusual performance characteristics. Although a little bit of “spin up” in .Net code is always expected, due to the GAC and whatnot, initial test runs sometimes took much longer than subsequent tests. Furthermore, tests with high numbers of iterations (such as 1,000,000) seemed to take quite some time to “spin down.” Where the problem lies cannot be determined without much more in-depth investigation. However, there does seem to be a “make or break” point, on an individual-system basis, for this code, at which point something goes very wrong outside of the code itself. It may well be the garbage collector; the code itself had finished running, but the application was still consuming massive amounts of RAM and CPU time long after it reported success. There may be some tweaks that can be made to the code itself to assist the garbage collector, but altogether, it seems that Mutex is indeed an incredibly slow system. The speed difference between the Xeon system and the other systems seems to indicate that context switching is also killing performance, since it is running many more threads at once.
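
      One possible tweak along those lines, offered only as an untested sketch, would be to force a full collection between test passes so that cleanup from one pass does not bleed into the timing of the next:

      Public Sub FlushBetweenRuns()
        ' Hypothetical helper, not part of the benchmark above: call between passes.
        GC.Collect()                  ' ask the CLR to collect now
        GC.WaitForPendingFinalizers() ' let any pending finalizers run
        GC.Collect()                  ' collect whatever the finalizers released
      End Sub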

      J.Ja

    • #3204207

      Building a New Personal IT Infrastructure

      by justin james ·

      In reply to Critical Thinking

      So I have decided that it is time to finally overhaul my existing development environment at home. There are a lot of reasons behind it, but to put it simply, my infrastructure at home is meeting my needs, but it has me nervous and frustrated. Currently, I have two machines at home: an almost mid-range eMachines box (Sempron 3200, 1 GB RAM) running Windows XP Pro that I use for day-to-day work, development, and mild gaming, and an ancient white box (Athlon 1900+, 256 MB RAM) running FreeBSD 5.3 for *Nix development work and as a server for personal Web sites.

      The whole idea of replacing my systems has been in the back of my head for a while. I want the server to be running a RAID 1, because a drive failure on that machine would make my life extremely miserable for a day or two. It is running a very old version of FreeBSD (5.3) that was installed as my first real foray into *Nix systems administration nearly two years ago, and a lot of mistakes were made in the configuration of the system. Upgrading the OS does not seem like a great idea under those circumstances; I have things installed from ports, and things installed via make install, and between the two, I am really not sure that I want to find out what will happen on a buildworld. And the motherboard does not have a video card built into it, and I do not have any spare video cards, so working on it involves SSH (which I have no problem with, but which is useless in case of hardware or boot problems), or turning off both PCs, removing the video card from my desktop, putting it in the server, and restarting it, which is hardly ideal.

      The desktop PC is good enough to write and test code on, but it is incredibly slow. I cannot blame the system itself; if I were just doing the day-to-day work (email, Web browsing, the occasional game) it would be just fine. It is the development work that is killing it. VMware, Visual Studio, the 2 GB text files I sometimes load into a text editor: none of these things happen well on that system. Even worse, it has a video card which tends to reset itself during periods of high stress. It is certainly not “Vista Ready” for anything other than acting like XP with different colors; the Aero interface is out of the question.

      So I am in the final stretch of figuring out what my new infrastructure is going to look like. My requirements for the desktop PC are:

      * Intel Core 2 Duo CPU, 6400 at a minimum, 6600 preferred
      * 1 GB RAM, with room to grow to 2 GB or 4 GB when Vista is ready
      * Multimedia card reader (optional)
      * Silent case
      * Excellent video card
      * Floppy drive
      * SATA II storage system, with RAID 0/1 capabilities
      * Extremely fast hard drive for the OS and applications, either a very fast single drive, or a fast RAID 1
      * Extremely reliable hard drive for data storage, in the 160 GB to 200 GB range, RAID 1 preferred
      * Dual optical drives, 1 DVD dual layer burner, one plain DVD drive (minimum)
      * Gigabit Ethernet NIC
      * Windows XP Pro

      And the server should look like:

      * Adequate, modern CPU
      * 512 MB RAM minimum
      * Fairly quiet case
      * Basic optical drive (DVD or CD-ROM)
      * Basic video card
      * Floppy drive
      * Extremely reliable storage, 500 GB, RAID 1
      * Gigabit Ethernet NIC, possibly a second 10/100 NIC
      * FreeBSD, Solaris, or Linux

      Additionally, I am aiming to put the following items into my life:

      * 20″ Widescreen LCD monitor
      * 4 port KVM
      * 8 port Gigabit Ethernet switch
      * New mouse
      * Ergonomic keyboard

      The plan is to achieve the following goals:

      * Significantly improve the ergonomics of my home work environment to halt and hopefully reverse the damage that computing has done to my body
      * Redundancy of data on disk and near-line access to backups
      * Refresh the server to be much less kludged together
      * “Vista Ready” desktop
      * Not need to purchase new hardware for 4+ years
      * Top-of-the-line performance on the workstation, well above average gaming and multimedia performance
      * Able to run virtual machines without too much of a performance hit for testing, debugging purposes, particularly for product reviews and experimenting with *Nix
      * Gigabit Ethernet for fast access across the network
      * One monitor for “utility” purposes, particularly when watching movies or playing games, or testing and debugging code

      Currently, I am looking to use a pair of Western Digital RE drives (not the RE2 drives, they are just a touch too expensive) or possibly Hitachi TK7500 drives in the 160 GB capacity range for the main partition in the workstation, and a pair of less expensive 250 to 320 GB drives for data storage. The server, believe it or not, tends to more or less sit there. It will have a pair of 80 GB drives in a RAID 1 to work on, and a single 500 GB drive for backup purposes (for both itself and the workstation). I already have more than enough optical drives sitting around. The workstation’s video card will be extremely high end, and probably ATI, because I can get an outstanding deal on an ATI video card.

      The RAM will be the best stuff I can get my hands on, most likely Crucial Ballistix at 800 MHz, and most likely 1 GB at first, upgrading to 2 GB when I install Vista. The workstation’s motherboard will be an Asus P5B, and the CPU will be either an Intel Core 2 Duo 6600 or a 6400, depending upon my mood and the price when I make the purchase.

      The only really tough problems are the cases. For the server, I am tempted to just take an existing case, line it with Dynamat or underbody paint, and be done with it. That should cut the noise levels dramatically while not impacting heat dissipation. I have no clue what to do with the workstation, but a MicroATX case just will not be big enough. On the other hand, the typical tower case is too big. Why do I need five 5.25″ external drive bays? I really only need two of them, and four internal drive bays (minimum).

      Astute readers will notice that both machines will be getting floppy drives. There are still too many utilities out there that require a floppy drive, like Windows System Restore. And as anyone who has encountered it can tell you, it is quite disturbing to discover that a Windows installation requires a floppy drive for third-party drivers when no one puts a floppy drive into a system at this point in the game.

      I am going to build slowly, buying the things that immediately help me and never go down in price (NICs, switches, KVM, cases, input devices, etc.) up front, and then slowly doing one system at a time.

      Any suggestions for me from the folks out there?

      J.Ja

      • #3203340

        Building a New Personal IT Infrastructure

        by tony hopkinson ·

        In reply to Building a New Personal IT Infrastructure

        If memory prices are running cheap at the time you purchase, at least double up to 2 GB on the workstation. It will be more than worth it, especially in Visual Studio and its cousins. It’s a quality tool, but MS just has to use as much memory as it can. I’m running it with 1 GB on a bit better machine than what you’ve got, and I’ve seen my PC flinch a few times, especially when I’ve got my SQL 2005 tools up and a couple of local instances of SQL Server. Though you have the luxury of offloading the latter to your server.

      • #3203907

        Building a New Personal IT Infrastructure

        by justin james ·

        In reply to Building a New Personal IT Infrastructure

        Tony –

        Yes, I would much prefer 2 GB (who wouldn’t?), but I actually have a high tolerance for low RAM. Up until a year ago, I was running with 256 MB RAM to do Visual Studio work! Did I enjoy it? Nope. I agree, between Vista and the development work I do, 1 GB RAM is really barely enough, but if I am choosing between more RAM and better RAM, I will take the high-quality RAM any day of the week. Over the course of my computing lifetime, I have had too many problems caused by cheap RAM to think that a DIMM is a DIMM is a DIMM.

        The equipment specs have changed already. I am probably getting an outstanding value on an extremely high-end Intel board and a Core 2 Duo 6300; I will use the 6300 in the server in conjunction with a lower-end motherboard (my server really needs very little grunt), and use the high-end board and a 6600 in the workstation.

        Unfortunately, SQL Server does not get offloaded onto the server; I run *Nix on my server (currently FreeBSD, but I am open to change). I do use MySQL, and I will be evaluating PostgreSQL. Sadly, all too many pieces of software in the FOSS world assume MySQL, as if Oracle, SQL Server, DB2, PostgreSQL, Ingres, etc. do not exist. If you are running FOSS software, unless it is very high quality and has been tested with other DBs, you are locked into MySQL whether you want to be or not. Personally, I like MySQL and have nothing against it, but it is odd how FOSS, which is all about “choice,” locks you in. Likewise for Linux. All too often, I find that FOSS developers assume that we are all running Linux. It is ridiculous. Personally, I prefer BSD over Linux, and think it is on the same footing as Solaris (for different reasons; Solaris is a much better OS on a technical level, Linux is much more friendly). Again, I find the Linux-centric viewpoint of the FOSS movement ironic at best.

        I digress, as usual.

        J.Ja

      • #3138422

        Building a New Personal IT Infrastructure

        by tony hopkinson ·

        In reply to Building a New Personal IT Infrastructure

        I have experience with MySQL, up to version 4. Never got to PostgreSQL. As far as I’m aware, it still has more features than MySQL (which only just got stored procedures, in v5). Jacqui, who usually knows what he’s talking about on things ’nix, says it scales much better as well.

        Have you tried SQL 2005 yet? It is 3 to 5 times slower on ad hoc or parameterised queries. Msoft’s recommendation: don’t use ’em, move to stored procs.

      • #3138360

        Building a New Personal IT Infrastructure

        by justin james ·

        In reply to Building a New Personal IT Infrastructure

        Tony –

        Sadly, MySQL has a virtual lock on being the DB backend of choice (and frequently, the only usable DB backend!) for FOSS software, it seems. Using MySQL over any other RDBMS is not really a matter of choice. I find it rather ironic, given that FOSS is supposed to be partially about choice.

        I have used SQL 2005, but to be honest, what we are using it for is nothing deep enough to actually see any performance differences. I think the biggest table I have in it is about 2,500 rows, for our custom project time management system. Our DB usage tends to be FoxPro (gag me with a spoon), with the occasional Oracle work (only for development for our customers that use Oracle and let us download their data to work with in our environments), and MySQL (for our big, heavy-usage data runs, since I have a FreeBSD server dedicated solely to MySQL). My server that has SQL 2005 on it is just plain fugly; at one point, it was our only server, so there is something running (but not showing in Task Manager) consuming 25% to 50% of CPU at any given time; I am pretty sure it is Oracle related. So we have one server that runs SQL 2005 and Oracle and acts as our internal domain controller, DNS, VPN access, DHCP, the works; another server that is solely Exchange and external DNS; a third server that is purely FreeBSD for MySQL; and a fourth server that is external DNS and file storage. I know, it’s a WAY ugly setup, but I have severe budget constraints at my day job, and I was grateful to get the storage server, let alone be able to upgrade the network to GigE. And now I have to convince the boss to let me upgrade at least one PC (mine) to be Vista-capable, since it is a P4 2.8 (so old it does not even support HyperThreading) that is really gasping for air a lot of the time. More to the point, it’s a development box, so it needs to be able to run development-level stuff. He won’t be thrilled about it (he’s very used to dropping $500 on a white box and calling it a day), but that’s how it is.

        J.Ja

    • #3203358

      Attack of the Killer Zombie Project

      by justin james ·

      In reply to Critical Thinking

      The project that just will not die has knocked on my door again. This was one of those projects where I made just about every freelance development mistake out there, with an extremely “challenging” customer. The last I heard from this customer was probably sometime in February 2006, maybe March. I had already received 75% of the payment for the project, and my part of the project was 95% done.

      The customer wanted an eCommerce site and the initial project specs were fairly routine. I made a flat price quote, and when the customer pushed for more and more and more, I rolled over and gave it to him without charging him more. Without bringing up any hard numbers, let me just say that the hours that I have put into this project compared to the amount that I am getting paid bring me to around minimum wage. I am not making this up. It is really that bad. I never heard from the customer again, we had been stuck on the part of the project where he enters in the information for a few thousand products. I did everything in my power to make this smoother for him; I made changes to the online administration, and even made a custom Excel spreadsheet filled with validation macros to construct the needed SQL statements to get the products into the database.

      The real problem was that the customer wanted a complex “build a solution” system that allowed the components to be purchased separately, so there needed to be complex relationships between the products, and the customer just had no idea how to tie these things together.

      I was actually quite grateful to never hear from this guy again. I probably would have been happy even if I never saw a cent from him and he owed me money, because he was that big of a pain in the rear.

      The customer just called me this morning, laughed about waking me up (I was still sort of asleep), and then told me he is ready to get this finished.

      Hooray.

      J.Ja

      • #3203837

        Attack of the Killer Zombie Project

        by marioat ·

        In reply to Attack of the Killer Zombie Project

        Some people are just hagglers, i.e., people who want everything without paying for it. From the people who pester retailers with frivolous returns and bum coupons, up the food chain to the big business world, some people are skilled at getting around paying what the services they require actually cost. Why can’t I be more like that? OTOH, it stinks for the person whose work is being devalued, in other words disrespected.

        There’s a lot of hostility in the discussion areas of the site (among other places) about dwindling pay rates, H-1B visa abuse, no opportunity for entry-level IT people, etc. Somewhere along the line, a definition of the word “salary” needs to be established: the price an employer (or client) pays for services rendered over a defined period of time, be it an hour or a week. My employer buys, effectively, the fruits of my labor. They do NOT own me, any more than the client owns you or any other contract employee. And the less said about what your expenses must be to implement solutions, and how much you probably ended up eating for this guy, who must have thought you were kidding when you explained it to him, the better.

        And waking someone up to nag about work is just rude. That’s my 2 cents.

      • #3140806

        Attack of the Killer Zombie Project

        by mark miller ·

        In reply to Attack of the Killer Zombie Project

        There are some customers who should not be taken on as clients. I’ve seen this in years past when I’ve worked for consulting companies. I couldn’t tell you how to avoid them though. I don’t know what the telltale signs are. Even though fixed bids are how customers typically want to do things, IMO it just puts all the risk in the consultant’s lap. It’s inevitable that the customer is going to change requirements sometime during the project. You can count on it. The problem is they want the fixed bid estimate for the first set of requirements only. True, you could pad it, but the question is by how much. Since there’s a strong ethical temptation to not overcharge, or a market sensibility to keep the price low due to competition, you can end up in the situation where the customer has you over a barrel, especially if the requirements have vague language you could drive a truck through. I’ve seen this happen on several occasions.

        This is only a theoretical suggestion, but the answer may be to put your foot down and say that for the price that you negotiated earlier, you can only deliver X set of features, including all initial requirements, and if they want to add more you need to talk about another contract with a new budget for those features. I know there can be a strong internal tendency against doing this. I’m no paragon of putting my foot down with a customer. I have a strong sense of serving the customer’s needs, producing a decently designed system, and I HATE the idea of delivering a non-working system or one that does not satisfy them, for financial reasons. I want them to be happy. But as a consultant on contract I’ve seen where that leads, too. The customer can insist on only paying on the initial estimate, which didn’t include a lot of contingencies that became necessary later. I haven’t experienced getting my pay rate whittled down to minimum wage, but I’ve seen my hourly rate effectively cut in half.

      • #3140798

        Attack of the Killer Zombie Project

        by graham.gee ·

        In reply to Attack of the Killer Zombie Project

        I think you can add additional charges. Although you should have added a clause covering how long this project should take, there is still a place for what’s reasonable within the law. You should check on your rights; you might find you are not as tied in as you think.

        Good luck
        GG

      • #3140728

        Attack of the Killer Zombie Project

        by jr_hearty ·

        In reply to Attack of the Killer Zombie Project

        Did you include a Project Objective statement or a statement of scope to the customer? On most system analysis & design projects, scope creep must be proactively managed. 

        Scope creep will ruin a project – customer has no clear expectations and you are stuck with endless additions/changes. Without any written statement that clearly defines the project, the deliverables, the time line, & the budget, you have no real grounds to stand on.

        Before proceeding any further, you need to get the client to sign a scope statement that clearly lists what you are responsible for providing, the deadline for the project, and the amount the client will pay. If the client won’t agree, drop the project.

      • #3139957

        Attack of the Killer Zombie Project

        by justin james ·

        In reply to Attack of the Killer Zombie Project

        Sadly, this is a direct result of me doing “business by handshake.” I know, it sounds ridiculous to say it in this day and age, and I used to get burned a lot when I was letting someone else do the customer end of things. But since I started handling the customers myself, this is the only time I got burned. Every other time, I managed to have zero project or timeline creep, and a customer who was extremely cooperative and understanding, and who fully understood that Web sites do not magically appear without any work on their end.

        On this one, the customer had discussed a few features which seemed potentially tricky. I evaluated the OSS shopping cart that was being used and described its functionality to him, and he stated that he understood and that it met his needs. It turns out that 1) he changed his mind and wanted more, and 2) the OSS shopping cart used was a complete and utter piece of garbage. Between the customer changing his mind and the completely amateurish coding on the OSS software, the feature requests literally took many hundreds of hours to implement and test, while the pay was only in the range of a few thousand dollars (thus, the “minimum wage” computation).

        The biggest problem, though, is that the customer seems to think that data just automagically appears on a Web site. I explained up front, from Day 0, that he was responsible for entering the thousands of products, editing their pictures, etc., not me. I did offer to perform standard image editing (crop, resize) for an additional fee, which he declined. He literally spent months last winter harassing me every week about the site not being done… meanwhile he had gotten me maybe 50 products to put into it. Likewise, he would spend multiple hours on the phone with me discussing where the “Continue Shopping” button post-checkout should lead the customer, hours that he could have spent getting product information together.

        So really, the responsibility falls a lot on my shoulders (for working on trust and being a total pushover), some of it on the customer (he does push for things that we never discussed), and some of it on “who knew?” (for example, “who knew that the shopping cart would be so difficult to make simple changes to?”). Am I sore? Sure. I really am working at about minimum wage on the project, and since there are many more profitable ways to spend my time, I would prefer not to. I did gain a ton of experience on the project, though, experience which is quite useful and valuable to me.

        And I will say this: while I still will do business on a handshake or based on trust, it has to be with someone I already trust.

        J.Ja

      • #3139662

        Attack of the Killer Zombie Project

        by mark lefcowitz ·

        In reply to Attack of the Killer Zombie Project

        The hardest part of being an independent contractor – being in business for yourself – is learning how to protect yourself.

        All of us have under-bid jobs and had to eat the loss, or negotiated contracts that did not adequately specify important protections for our own interests.  All of us have had cranky, unrealistic, and perhaps even crazy customers.  By making mistakes, and learning from them, you will grow as a businessperson.

        Remember, it is a business; you are the one responsible for protecting your own interests.

        Some rules that I have found to be useful before a contract is signed:

        1. If the customer can not tell you exactly what they want, specifically spelled out as written metrics and deliverables in the contract or as an attachment to the contract, and they are not willing to pay you to do the analysis of what needs to be done, walk away.

        2. If the customer has unrealistic timeframes and/or expectations (e.g., implementing a process improvement effort in 8 weeks), walk away.

        3. If the project involves interacting with SMEs and other vendors, and the customer can not give you a clear idea of the dependencies that may affect your piece of the project, walk away.

        4. If the customer is expecting you to fulfill multiple roles in parallel (e.g., project manager and analyst), walk away.

        5. If during the screening process and interview process, you feel like you are being interrogated, walk away.

        6. If the customer does not treat you with courtesy and friendliness, walk away.

        7. If the customer expects you to wait more than net-30 for payment of an invoice (and as a rule I almost always insist on net-15), walk away.

        8. If the customer is unwilling to pay a substantial penalty for late payment of an invoice, walk away.

        9. If you have a nagging feeling that something is just not right, and can’t figure it out . . . don’t sign the contract; walk away.

        Remember always, if the customer knew how to do what you do, he wouldn’t be talking to you about getting you to do it.

        Be courteous, be professional, and be clear about what you need to equitably protect your own interests before you sign on the dotted line.

        Once you sign a contract, you are stuck with the consequences, both good and bad.

        So learn to walk away.

      • #3202968

        Attack of the Killer Zombie Project

        by esalkin ·

        In reply to Attack of the Killer Zombie Project

        One of my early jobs was as a network admin for a small company that offered a variety of computer-based services.  Much of this involved software that was custom written for the company by contractors.  The owner (a lawyer) would continually require minor changes and revisions.  All the while, he was running his business using the “unfinished” software.  This guy was using the delays to get free support on a product that he had not even finished paying for.  I questioned the programmer as to why he did not put his foot down.  He replied: “His business is growing so fast. He has promised me two more contracts!”

        P.T. Barnum was right.

      • #3202748

        Attack of the Killer Zombie Project

        by dave.clarke ·

        In reply to Attack of the Killer Zombie Project

        It’s unfortunate that you don’t have a contract – just a handshake. But you should still be performing a robust change management procedure every time the scope changes.

        Surely the client couldn’t argue with a documented change specification that is scoped, sized and costed. Whether the change results in the raising of a variance to the agreed amount depends upon the significance of that change. But it (at least) gives you an opportunity to reinforce the uncharged scope changes with the client while supporting your reasonable request for a payment variation.

        Clearly some things need to be provided gratis, however others of a significant nature should be billable. If the client won’t endorse that approach restrict your delivery to the agreed functionality.

        That said, it’s always best to have the relationship formalised with a clear scope statement in the absence of a functional specification, contract and project charter defining roles and their responsibilities. Otherwise how can you possibly provide a reliable quote when you don’t have the project boundaries or an understanding of who is doing what and when?

      • #3202644

        Attack of the Killer Zombie Project

        by just_chilin ·

        In reply to Attack of the Killer Zombie Project

        About six years ago, when I started consulting, I built an invoicing system for a customer who owned a small family business (selling fresh produce, etc.). This guy paid what we had initially agreed on, then became a pain in my rear. When his printer stopped working, he called me; when his cable modem (from COMCAST) had problems, he called me. This guy called my cell phone for just about anything. I had to switch numbers and have stayed away from him ever since.

    • #3139702

      Multithreading Tutorial – Part VI: Monitor Performance

      by justin james ·

      In reply to Critical Thinking

      This is the sixth installment of a multi-part series demonstrating multithreading techniques and performance characteristics in VB.Net.

      So far, we have covered single thread performance, and SyncLock and Mutex for maintaining data concurrency. This time, we will be using Monitor for concurrency. The differences between the three methods are subtle, but important. SyncLock ensures that a particular block of code is only being run by one thread at a time. Mutex is a class of its own with methods to ensure that only one thread is using it at a time; by attempting to lock the Mutex and unlocking it when finished, we are assured that only one thread is performing an operation at a time. In contrast, Monitor exposes shared (static) methods that perform the locking for you, using an object as the key for each lock.

      One very important note about using the Monitor class: do not use primitives as the object to lock! Because a primitive gets automatically wrapped in a new object (boxed) each time it is passed to Monitor, every attempt to lock on the primitive is treated as a lock on a different object, even though the same primitive was passed in. As a result, the lock will not behave properly, and concurrency will be lost.
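
      To make that point concrete, here is a small illustration of the difference (this is not part of the test program below, and the class and member names are invented for the example):

      Public Class LockTargetExample
        ' Wrong: an Integer is boxed into a brand-new object on every call to Monitor.Enter,
        ' so no two threads ever contend for the same lock, and Monitor.Exit throws a
        ' SynchronizationLockException because the freshly boxed object was never locked.
        Private Shared BadLockTarget As Integer = 0

        ' Right: one shared reference-type object that every thread locks on.
        Private Shared GoodLockTarget As New Object

        Public Shared Sub UnsafeIncrement(ByRef Counter As Integer)
          Threading.Monitor.Enter(BadLockTarget)
          Counter += 1
          Threading.Monitor.Exit(BadLockTarget)
        End Sub

        Public Shared Sub SafeIncrement(ByRef Counter As Integer)
          Threading.Monitor.Enter(GoodLockTarget)
          Counter += 1
          Threading.Monitor.Exit(GoodLockTarget)
        End Sub
      End Class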

      Here is the code used for this test:

      Public Sub MonitorMultiThreadComputation(ByVal Iterations As Integer, Optional ByVal ThreadCount As Integer = 0)
        Dim twMonitorLock As MonitorThreadWorker
        Dim IntegerIterationCounter As Integer
        Dim iOriginalMaxThreads As Integer
        Dim iOriginalMinThreads As Integer
        Dim iOriginalMaxIOThreads As Integer
        Dim iOriginalMinIOThreads As Integer

        twMonitorLock = New MonitorThreadWorker

        Threading.ThreadPool.GetMaxThreads(iOriginalMaxThreads, iOriginalMaxIOThreads)
        Threading.ThreadPool.GetMinThreads(iOriginalMinThreads, iOriginalMinIOThreads)

        If ThreadCount > 0 Then
          Threading.ThreadPool.SetMaxThreads(ThreadCount, ThreadCount)
          Threading.ThreadPool.SetMinThreads(ThreadCount, ThreadCount)
        End If

        For IntegerIterationCounter = 1 To Iterations
          Threading.ThreadPool.QueueUserWorkItem(AddressOf twMonitorLock.ThreadProc, Double.Parse(IntegerIterationCounter))
        Next

        While MonitorThreadWorker.IntegerCompletedComputations < Iterations

        End While

        Threading.ThreadPool.SetMaxThreads(iOriginalMaxThreads, iOriginalMaxIOThreads)
        Threading.ThreadPool.SetMinThreads(iOriginalMinThreads, iOriginalMinIOThreads)

        twMonitorLock = Nothing
        IntegerIterationCounter = Nothing
      End Sub

      And the MonitorThreadWorker class:

      Public Class MonitorThreadWorker
        Private Shared ObjectStorageLock As New Object
        Private Shared ObjectComputationsLock As New Object
        Public Shared IntegerCompletedComputations As Integer = 0
        Private Shared DoubleStorage As Double

        Public Property Storage() As Double
          Get
            ' Read the value inside the lock and release the lock before returning;
            ' a Return placed before Monitor.Exit would leave the lock held.
            Dim CurrentValue As Double
            Threading.Monitor.Enter(ObjectStorageLock)
            CurrentValue = DoubleStorage
            Threading.Monitor.Exit(ObjectStorageLock)
            Return CurrentValue
          End Get
          Set(ByVal value As Double)
            Threading.Monitor.Enter(ObjectStorageLock)
            DoubleStorage = value
            Threading.Monitor.Exit(ObjectStorageLock)
          End Set
        End Property

        Public Property CompletedComputations() As Integer
          Get
            Return IntegerCompletedComputations
          End Get
          Set(ByVal value As Integer)
            IntegerCompletedComputations = value
          End Set
        End Property

        Public Sub ThreadProc(ByVal StateObject As Object)
          Dim ttuComputation As ThreadTestUtilities

          ttuComputation = New ThreadTestUtilities

          Storage = ttuComputation.Compute(CDbl(StateObject))

          Threading.Monitor.Enter(ObjectComputationsLock)
          CompletedComputations += 1
          Threading.Monitor.Exit(ObjectComputationsLock)

          ttuComputation = Nothing
        End Sub

        Public Sub New()

        End Sub
      End Class

      Here are the results of our tests. All tests are for 1,000,000 iterations, and the results are in milliseconds per test run.

      TEST 1

      This test allows the ThreadPool to manage the total number of minimum and maximum threads on its own:

                     Test 1      Test 2      Test 3      Test 4      Test 5     Average
      System A    18609.375   21125.000   15187.500   16953.125   14859.375   17346.875
      System B    16890.301   13624.738   19702.747   19155.882   25280.765   18930.887
      System C    16265.625   28687.500   18109.375   15765.625   19015.625   19568.750
      System D    30468.945   30547.071   30422.070   30390.820   30484.570   30462.695

      Average (all systems): 21577.302

      TEST 2

      In this test, we limit the maximum number of threads to one per logical processor:

                     Test 1      Test 2      Test 3      Test 4      Test 5     Average
      System A    19000.000   17875.000   16109.375   19937.500   17546.875   18093.750
      System B    28765.073   20327.735   25983.876   30952.531   18812.139   24968.271
      System C    22406.250   34031.250   36984.375   45703.125   38093.750   35443.750
      System D    30453.320   30437.695   30484.570   30359.569   30515.820   30450.195

      Average (all systems): 27238.992

      TEST 3

      This test uses only one thread:

                     Test 1      Test 2      Test 3      Test 4      Test 5     Average
      System A    17625.000   13609.375   15921.875   18000.000   15890.625   16209.375
      System B    19218.381   14812.216   24437.031   20030.865   37702.401   23240.179
      System C    26562.500   22828.125   24218.750   34640.625   31171.875   27884.375
      System D    30453.320   30437.695   30406.445   30515.820   30562.696   30475.195

      Average (all systems): 24452.281

      TEST 4

      This test uses two concurrent threads:

                     Test 1      Test 2      Test 3      Test 4      Test 5     Average
      System A    13468.750   14687.500   15796.875   17312.500   13625.000   14978.125
      System B    29124.441   22187.074   21077.720   13640.363   16859.051   20577.730
      System C    16625.000   15687.500   18375.000   17406.250   17296.875   17078.125
      System D    30453.320   30265.819   30437.695   30468.945   30422.070   30409.570

      Average (all systems): 20760.888

      TEST 5

      Here we show four concurrent threads:

                     Test 1      Test 2      Test 3      Test 4      Test 5     Average
      System A    24515.625   25187.500   15546.875   26234.375   25125.000   23321.875
      System B    33061.865   34436.839   31327.524   32593.124   18484.020   29980.674
      System C    24375.000   21062.500   20656.250   23750.000   20531.250   22075.000
      System D    30406.445   30390.820   30468.945   30328.319   30531.445   30425.195

      Average (all systems): 26450.686

      TEST 6

      This test uses eight concurrent threads:

                     Test 1      Test 2      Test 3      Test 4      Test 5     Average
      System A    26156.250   25593.750   25328.125   25906.250   26109.375   25818.750
      System B    44889.763   34108.720   22905.810   19077.759   16843.427   27565.096
      System C    34796.875   34343.750   30812.500   33718.750   21296.875   30993.750
      System D    30625.196   30531.445   30328.319   30468.945   30406.445   30472.070

      Average (all systems): 28712.417

      TEST 7

      Finally, this test runs 16 simultaneous threads:

                     Test 1      Test 2      Test 3      Test 4      Test 5     Average
      System A    26109.375   25421.875   25640.625   25203.125   25281.250   25531.250
      System B    31296.274   22093.326   16359.061   42827.303   20687.103   26652.613
      System C    41296.875   32125.000   34078.125   32781.250   29984.375   34053.125
      System D    36890.861   50687.824   50672.199   50547.199   50578.449   47875.306

      Average (all systems): 33528.074

      System A: AMD Sempron 3200 (1 logical x64 CPU), 1 GB RAM

      System B: AMD Athlon 3200+ (1 logical x64 CPU), 1 GB RAM

      System C: Intel Pentium 4 2.8 GHz (1 logical x86 CPU), 1 GB RAM

      System D: Two Intel Xeon 3.0 GHz (2 dual core, HyperThreaded CPUs providing 8 logical x64 CPUs), 2 GB RAM

      It is extremely important to understand the following information and disclaimers regarding these benchmark figures:

      They are not to be taken as absolute numbers. They are taken on real-world systems with real-world OS installations, not clean benchmark systems. They are not to be used as any concrete measure of relative CPU performance; they simply illustrate the different relative performance characteristics of different multithreading techniques on different numbers of logical CPUs, in order to show how different processors can perform differently with different techniques.

      You will see that while Monitor is consistently a bit slower than SyncLock, it is nearly as fast, and it is much faster than Mutex. So why have different methods at all? Well, there are some very good reasons: each method has its own purpose. Although SyncLock is marginally quicker than Monitor, Monitor locks on a per-object basis, while SyncLock locks an entire block of code at a time. For a single line where only one variable is being used and requires concurrency (such as an incrementer), SyncLock is the better choice. But for something more complex, such as many lines of code that deal with many variables, only a few of which need concurrency, Monitor is a better choice. It is preferable to let other threads run as much of that code as they can, blocking only on certain parts, rather than to have the entire block of code held up. Mutex has capabilities that do not exist in Monitor or SyncLock; for further information, check the documentation.
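
      As a rough sketch of that trade-off (the class and member names below are invented for illustration and are not taken from the test code), the same shared total can be guarded either way:

      Public Class GranularityExample
        Private Shared LockObject As New Object
        Private Shared SharedTotal As Double = 0

        ' For a single shared statement, SyncLock reads cleanly and locks the whole block:
        Public Sub AddToTotal(ByVal Value As Double)
          SyncLock LockObject
            SharedTotal += Value
          End SyncLock
        End Sub

        ' In a longer routine, Monitor lets the thread-local work run unlocked and holds
        ' the lock only around the line that actually touches shared state:
        Public Function BuildReport(ByVal Value As Double) As String
          Dim LocalResult As String = "Computed: " & (Value * 2).ToString()

          Threading.Monitor.Enter(LockObject)
          Try
            SharedTotal += Value
          Finally
            Threading.Monitor.Exit(LockObject)
          End Try

          Return LocalResult
        End Function
      End Class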

      J.Ja

    • #3140374

      The Firefox Market Share Myth

      by justin james ·

      In reply to Critical Thinking

      One of the persistent pieces of “common knowledge” out there is the idea that Firefox is slowly grinding Internet Explorer into the ground. Along with this “common knowledge” is the assumption by some Web developers that since the Firefox Revolution is nigh, there is no real reason to take Internet Explorer into consideration. Sadly for the Firefox fans out there, this simply is not true.

      Where does this piece of “common knowledge” come from? A lot of people point towards IE’s declining market share, FF’s rising market share, and FF’s current market share of about 30%. However, the people who point to these numbers are definitely failing Statistics 101. As I said in June 2006 (“Be Careful That Your Data Does Not Lie”), first-order numbers are not very useful, and are essentially worthless without at least some baseline numbers to compare them against.

      Having been curious about this for some time, I decided to see if Firefox’s adoption rates were the stuff of legends or not. And I have found that Firefox is indeed not a superhero, but an average Joe.

      Using W3Schools’ month-by-month browser market share numbers, I came up with a startling fact: Firefox’s market share growth (and indeed, the entire Gecko family’s market share growth) since the introduction of Firefox is actually lower than it was before Firefox was introduced.

      Here is my chart of calculations:

      Browser market share

      The Pre-Firefox velocities and accelerations are based on 12 months of data, and the Post-Firefox numbers use 21 months of data.

      Note: I recognize that these numbers and conclusions may be controversial; as soon as I can (technical issues), I will make the full spreadsheet available for download. I will also be more than glad to use alternative market share numbers from a different source, as long as the alternate source measures a large sample of users and is of “general purpose” interest, to avoid potential skew caused by a niche audience.

      [Added: 9/29/2006: Download the spreadsheet]

      What is happening? Well, what you are seeing is that the velocity (the rate of market share growth) is lower for the Gecko browsers in the 21 months after Firefox came out than it was in the 12 months before Firefox. To eliminate any doubt, I broke the numbers out into Firefox and non-Firefox Gecko browsers, to show that it is not simply Netscape or Mozilla dragging the numbers down.
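
      For anyone who wants the mechanics spelled out, the “velocity” figure is nothing more exotic than the average change in market share per month over a period. A minimal sketch of that calculation follows; the function and parameter names are made up for the example, and the actual spreadsheet may organize the arithmetic differently:

      ' Shares are in percentage points (e.g., 18.4 for 18.4%); the result is points per month.
      Public Function AverageMonthlyVelocity(ByVal StartingShare As Double, ByVal EndingShare As Double, ByVal MonthsInPeriod As Integer) As Double
        Return (EndingShare - StartingShare) / CDbl(MonthsInPeriod)
      End Function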

      What we see in these numbers is that while the bulk of IE’s market share loss since FF came out is due to FF, the rate of that loss is smaller than it was before. More important to note is that Firefox came out of the gate with a 16.6% market share in January 2005, and the Gecko market share was at 18.4% in December 2004. In other words, Firefox quickly supplanted the other Gecko browsers, and as Firefox immediately became the “face” of Gecko, Gecko’s growth rate wilted. Or to put it even more simply, Firefox is not as attractive to users as Mozilla and Netscape were.

      I was suspicious. The numbers just did not jibe with the “common knowledge.” Indeed, I had bought into the idea that Firefox had greatly accelerated IE’s demise myself. I thought that maybe the numbers were off, but many sources report the 30% number (give or take a few percentage points) as Firefox’s current market share as well. And I have never heard anyone argue against W3Schools’ numbers in the past. I thought that maybe Firefox’s immediate 16.6% market share in January 2005 seemed wrong, so I checked Firefox’s site to see when it was officially released as Version 1.0: November 4, 2004. So yes, the W3Schools numbers do not break out Firefox as a separate browser for the first two months. But even that seems pretty irrelevant; the raw numbers do not show Gecko’s market share growing by more than 1.6% in any of the months between July 2004 and January 2005, and the 1.6% number is in November 2004. So even accounting for the fact that Firefox was being rolled into Gecko for some time, which did cause a slight spike in Gecko market share capture, it is difficult to say that Firefox is driving adoption of itself, or causing IE’s demise.

      At best, we can conclude that while non-IE browsers were always bound to eventually hit the “glass ceiling” of people who will never leave IE, Firefox may have raised that ceiling a bit.

      J.Ja

      • #3140359

        The Firefox Market Share Myth

        by rahulbatra ·

        In reply to The Firefox Market Share Myth

        This is certainly one of the better non-biased studies I’ve seen for some time. Most people, by default, are against Microsoft in every field, be it browser wars or OS wars. However, there is another facet of the browser arena we should be looking into. Adoption is about more than numbers and statistics. While I would not question your extensive research, I certainly believe that Firefox is a major player in the market because web designers and programmers feel it is.

        Before Firefox (and in the post-Netscape glory days), most designers or web programmers used to test their sites using Internet Explorer only, because they never felt a need to do otherwise. They could be sure that almost everyone would be using the same thing. But now, most of them do use Firefox as a secondary, if not a primary, testing environment. The reason for this may be the ‘common knowledge’ you refer to, but Firefox is one of the first browsers after the Netscape 4.7x series to be taken seriously.

      • #3138598

        The Firefox Market Share Myth

        by justin james ·

        In reply to The Firefox Market Share Myth

        rahulbatra –

        I agree 100%! Firefox’s market share is not to be taken lightly at all. 27% of the market is still a substantial amount, no matter how you cut it. Web developers and designers who ignore Firefox compatibility are asking for big trouble. Personally, I do exactly what a lot of other Web developers and designers do, which is to have both browsers installed; one gets used for everything, and the other gets used for testing only. The point is, we are still testing on both browsers. With a 27% market share, it is what we have to do.

        J.Ja

      • #3138477

        The Firefox Market Share Myth

        by emromero ·

        In reply to The Firefox Market Share Myth

        I see a misunderstanding of “common knowledge” here, because I don’t see or read that many people saying FF will have the greatest market share in the near future, so your analysis is disproving something that is simply not common knowledge.   Common knowledge for Firefox says:
        – FF has broken IE’s virtual monopoly on browsers.
        – It has become a “mainstream browser” on the Internet.
        – It has become a favorite browser for many developers, especially those with Web 2.0 in mind.
        – FF’s market share makes it a browser that MUST be supported by any serious website/webapp.
        Besides, the loss of velocity in FF adoption is reasonable if we consider that perhaps the IE market share is based mainly (not solely) on users who have a Windows-based PC and will never know about, or be interested in, changing browsers.
        As a person in charge of some websites, Firefox was the first browser that made me change my “if it works for IE, it’s OK” attitude; now I have an “it must work on FF (mainly for cross-platform compatibility) and IE (for most-used-browser compatibility)” attitude.
        So your “Firefox is not as attractive to users as Mozilla and Netscape were” should instead be “Firefox is getting to those users who simply don’t care and are not looking for alternatives. The ones that care have already made the move to FF.” 🙂

      • #3138448

        The Firefox Market Share Myth

        by justin james ·

        In reply to The Firefox Market Share Myth

        emromero –

        “I see a misunderstanding of “common knowledge” here, because I don’t see or read that many people saying FF will have the greatest market share in the near future, so your analysis is disproving something that is simply not common knowledge.”

        You may disagree with my perception of what “common knowledge” is, and as such, feel that there is no “Myth of Firefox Market Share” to disprove, but it does not discount my analysis at all.

        “Common knowledge for Firefox says:
        – FF has broken IE’s virtual monopoly on browsers.
        – It has become a “mainstream browser” on the Internet.
        – It has become a favorite browser for many developers, especially those with Web 2.0 in mind.
        – FF’s market share makes it a browser that MUST be supported by any serious website/webapp.”

        I agree 100% with these statements.

        “Besides, the loss of velocity in FF adoption is reasonable if we consider that perhaps the IE market share is based mainly (not solely) on users who have a Windows-based PC and will never know about, or be interested in, changing browsers.”

        This does not agree with the evidence. There were non-IE alternatives well before FF. If you look at the numbers in the spreadsheet, you will see that the Gecko family had a good share of the market before FF was released; in fact, the Gecko family grew nearly as much in the 12 months prior to FF’s release as in the 21 months after FF was released.

        To put it simply, Firefox’s adoption rate is lower than its predecessors’ was. In theory, there should be a certain percentage of users open to trying a non-IE browser, even in the new PC market. Unless IE has suddenly started making users happy, which is quite doubtful (consider that IE has done nothing but fix bugs in the time since FF came out; it has not added any new features), we should at the very least be seeing the adoption rates hold steady. Instead, they are dropping. There is still an overall shift of users from IE to FF, but it is not as strong as the pre-Firefox movement was.

        “As a person in charge of some websites, Firefox was the first browser that made me change my “if it works for IE, it’s OK” attitude; now I have an “it must work on FF (mainly for cross-platform compatibility) and IE (for most-used-browser compatibility)” attitude.
        So your “Firefox is not as attractive to users as Mozilla and Netscape were” should instead be “Firefox is getting to those users who simply don’t care and are not looking for alternatives. The ones that care have already made the move to FF.” :)”

        I think I’ve found where we agree completely, and your previous statement makes more sense in light of this one. Where we agree is that there was a huge pool of users who were looking for a non-IE browser, or were already on one, before FF’s release (after all, FF opened with a 16.6% share, taken nearly 100% from Mozilla and Netscape users). While these people immediately jumped on FF (from Mozilla, Netscape, etc.) as soon as it came out, FF is having a harder time gaining traction with casual and new users. It is not that it is “less attractive” than Mozilla or Netscape were, just that the core group was just about tapped out by the time FF was released, and FF has provided additional growth where there was none before.

        I can believe and agree with that analysis as well. There has always been a core group of people who would use any browser (Opera, Mozilla, Netscape) as an alternative to IE. It is also important to note that my analysis does not take OS into account. Part of the IE-to-FF transfer could be caused by growth in the number of Linux or Mac users out there. However, Mac market share really isn’t increasing, and Linux desktop usage is still sitting on the launch pad, so I think OS would be a wash.

        I appreciate your feedback, it is an interesting angle to consider with a lot of merit.

        J.Ja

      • #3138404

        The Firefox Market Share Myth

        by mark miller ·

        In reply to The Firefox Market Share Myth

        My impression always was that FF was at around 12% total market share, so I’m confused. When it first came out I used to hear it was at about 3%, but it kept growing, eventually getting up to about 10% last year. So when you say it debuted at 16%, that doesn’t jibe with what I remember reading about it.

        Maybe I misread your article. Were you intending to gauge developer interest, or user interest overall? When you’re looking at population statistics, you need to consider the demographics. According to W3Schools, their statistics are from people who have visited their site. Given the subject matter, I would assume most people who visit there are developers and web designers, though they say they’ve checked their data with other sites that collect usage statistics and they think they show the same general trend. I would expect a higher statistic for FF with developers. It’s entirely possible that there’s a portion of the developer population that uses FF, just out of personal preference, but which creates web apps primarily for IE.

        W3Schools also had these disclaimers:

        “Browsers that count for less than 0.5% are not listed.

        W3Schools is a website for people with an interest for web technologies. These people are more interested in using alternative browsers than the average user. [my emphasis] The average user tends to use Internet Explorer, since it comes preinstalled with Windows. Most do not seek out other browsers.

        These facts indicate that the browser figures below are not 100% realistic. Other web sites have statistics showing that Internet Explorer is used by at least 80% of the users.”

        By looking at the latest data from W3Schools, they say:

        IE – 61.8%

        Gecko browsers – 30%

        Opera – 1.6%

        Total = 93.4%

        So they’re missing out on 6.6% of the market in their stats for some reason. Further, it does seem to confirm the “common knowledge”, at least in Gecko browsers. They show that FF makes up most of the 30%.

        If you check out this Wikipedia article, you’ll see that several surveys have been done on browser usage up through last summer, and they all show IE in the mid- to low- 80s, while Mozilla/FF is at around 13%. For most of the surveys, if you add up their numbers across the browsers you’ll see that they have almost 100% coverage as well (Web Side Story was the worst).

        None of this is to say that developers shouldn’t pay attention to FF/Mozilla. Any product that has a market share around 10% should be given serious consideration. I’m just questioning your source of data.

      • #3138356

        The Firefox Market Share Myth

        by justin james ·

        In reply to The Firefox Market Share Myth

        Mark –

        Questioning the source of data is indeed a good thing; as I say in the blog, if anyone else has a different set of numbers that they believe are more appropriate, I would be delighted to re-run the numbers. Indeed, the spreadsheet is available for download now, so anyone can plug in their own numbers and see for themselves!

        While I agree that W3Schools’ numbers may carry a pro-Firefox tilt because of their intended audience, I think that would actually strengthen my argument that Firefox is not exciting users quite as much as it is perceived to; after all, if Firefox does not encourage developers to dump IE at a faster pace than previous Gecko browsers did, what will?

        I definitely feel that there is a “glass ceiling” on Gecko market share, and that it is being rapidly approached. I may slice the numbers a bit more granularly soon, to see what Gecko market share growth looked like in smaller segments of time before the Firefox release, to see if the decline in growth rates had actually started before the Firefox release; who knows, maybe the rates were rapidly dropping, and Firefox renewed interest in non-IE browsers?

        You are absolutely right about “Unknown” browsers; I would assume that a lot of that is Safari, various Linux browsers, and Gecko & Opera users who disabled browser sniffing, as well as search engines.

        I also have seen a number of market share surveys, and I have seen many that are in alignment with W3Schools’; I have also seen some that do not agree with it. One of the reasons I chose the W3Schools numbers is precisely that they are in the camp that is more generous to Firefox. With larger numbers, trends are a bit magnified. It also helps to head off any arguments that I “cherry picked” numbers favorable to IE. Maybe that is a bad decision on my part, maybe it isn’t.

        But like I said, I am more than happy to post the results of these calculations for any dataset that any user suggests! I do appreciate the insightful feedback on this; we all know the famous quote about lies & statistics, but I do try to play my numbers as straight as possible. Before embarking on these numbers, I actually consulted someone with a Masters in Statistics and a Bachelors in Mathematics to ensure that my calculations would be best suited for this type of research. He told me that my initial methodology (taking instantaneous velocity and acceleration numbers), while useful and accurate, was actually too drilled-down to give much meaningful information that a “non math person” would be able to get value from, and he suggested the methodology that I actually used.

        J.Ja

      • #3140311

        The Firefox Market Share Myth

        by emredondo ·

        In reply to The Firefox Market Share Myth

        I don’t care about the FF market share; I use Firefox because it is better. In 2006, IE is only a piece of crap, very old and outdated.

      • #3140298

        The Firefox Market Share Myth

        by faisal_mcc ·

        In reply to The Firefox Market Share Myth

        This was very good research. As a web developer, I check the W3Schools browser statistics every month. Personally, in most cases I like to use Opera, so I test my projects in IE, Firefox, and Opera as well.

        About the Gecko-based browsers, I think Netscape was more attractive than Firefox.

      • #3140297

        The Firefox Market Share Myth

        by roho ·

        In reply to The Firefox Market Share Myth

        I always find it’s like this with statistics: You can prove the world is round using statistics.

        An interesting approach to writing an article based on statistical data that, it is later agreed, might not be completely relevant or related to the “mainstream” web users.

        I don’t think that Firefox will overthrow Internet Explorer in the near future, but it has brought about a change in attitude among web designers (build for the best and test for the rest) and has also awakened a sleeping giant in Redmond. Anyway, Firefox has brought about a shift in the spectrum which, however small it may seem, has proven to make a difference.

      • #3140233

        The Firefox Market Share Myth

        by user.booted ·

        In reply to The Firefox Market Share Myth

        I knew it wasn’t doing that well. I say screw FF, get Opera, but I’d take FF over IE any day.

      • #3140218

        The Firefox Market Share Myth

        by emromero ·

        In reply to The Firefox Market Share Myth

        Thanks for your response; I’d like to add a comment. You said:

        Where we agree is that there was a huge pool of users who were looking for a non-IE browser, or were already on one, before FF’s release (after all, FF opened with a 16.6% share, taken nearly 100% from Mozilla and Netscape users). While these people immediately jumped on FF (from Mozilla, Netscape, etc.) as soon as it came out,

        As Mark stated, the issue of FF appearing with 16% does not mean that from one month to another all 16% switched. For me, it means that these statistics counted FF users as Mozilla users, so if you are taking all of 2004 to be Mozilla only, you are entering wrong data into consideration, because your “before Firefox” numbers are giving other browsers the credit Firefox slowly began taking during 2004. Having said that, I guess I have a point in saying that it is not the source but your interpretation of the data that flaws the analysis. I think a well-based analysis could only be done if we had access to raw log files from 2004 and started a “reclassification” of browsers that takes FF into consideration in 2004, which the W3Schools numbers did not.

      • #3140209

        The Firefox Market Share Myth

        by julie ·

        In reply to The Firefox Market Share Myth

        I am not a statistics expert, but I use FF as my primary browser and IE only when a website tells me it doesn’t support my browser. With this in mind, could it be that some statistics are missing to accommodate the places that do not recognize FF? I understand that yours do, but there are others out there trying to run numbers whose websites can’t be accessed or viewed using FF. Could this skew the overall picture at all?

      • #3140141

        The Firefox Market Share Myth

        by cvestal ·

        In reply to The Firefox Market Share Myth

        Wow, what a revelation! IE still has the biggest marketshare…

        You began by saying there must be a baseline. I agree, but your “analysis” is not based on much of a baseline. Taking one web site’s stats is not a very scientific method. I do not dispute the numbers; it is just that your method is flawed. Your study is anecdotal.

        In essence, everyone’s browser experience has improved due to Firefox –  no matter what the numbers are. The numbers, real or perceived, were enough to spur Microsoft into improving IE. I am not an antagonist that hates Microsoft, in fact, I have recently begun retooling my web dev skills to be more Microsoft-centric by learning asp.net.

        Respectfully,

        CV

      • #3140126

        The Firefox Market Share Myth

        by justin james ·

        In reply to The Firefox Market Share Myth

        cvestal –

        “Wow, what a revelation! IE still has the biggest marketshare…”

        Whether or not IE still has the most market share was never the question. The fact remains, the numbers that I used show that IE’s market share slide has been slower since the release of Firefox than it was before.

        As a few other readers have pointed out, the source for my numbers may be biased; if anything, they are tilted towards non-IE browsers! So I suppose you may be able to call the numbers “anecdotal.”

        That being said, I once again reiterate the offer made in the original post: if you are aware of another source of this data that you believe would be more accurate, let me know, and I will be delighted to re-run the numbers and publicly post the results.

        J.Ja

      • #3140124

        The Firefox Market Share Myth

        by jswalwell ·

        In reply to The Firefox Market Share Myth

        Thanks for your very informative article. It stimulated me to look deeper into Firefox. One item in their FAQ that may be of interest concerns the ‘official’ abbreviation:

        8. How do I capitalize Firefox? How do I abbreviate it?

        Only the first letter is capitalized (so it’s Firefox, not FireFox.) The preferred abbreviation is “Fx” or “fx”.

        I tried Fx at home last year and was not impressed with the much vaunted speed. It didn’t feel faster than IE but I will download the latest version and give it a second chance.

        Jack Swalwell (jswalwell@parker.com)

      • #3140114

        The Firefox Market Share Myth

        by justin james ·

        In reply to The Firefox Market Share Myth

        jswalwell –

        That was an interesting item about Firefox’s “official” abbreviation. “FF” is a lot like “Legos” (officially “Lego bricks”): regardless of what the creator wants, people will call it what they like best. I see “FF” a lot, but now that I know there is an “official abbreviation,” I will be sure to use it from here on out.

        Thanks!

        J.Ja

      • #3139996

        The Firefox Market Share Myth

        by www.cybertopcops.com ·

        In reply to The Firefox Market Share Myth

        As many users have said, it doesn’t matter what the statistics say; Firefox works better than IE and that is all that counts. True, there are millions of users who simply don’t know that there is an alternative to IE, and not necessarily Firefox. They simply use what they are given. Who, in their right mind, would continue to use a bug-ridden browser with more security holes than a watering can? It is because people don’t know any better. But this article is not about which one is better; it is about market share.

        We might have seen that the decline in IE market share and the gain in Firefox users have been overinflated, but it will be interesting to see the trends when Vista gets released. With so many changes made to IE (some good and some unnecessary), we might see a spike in Firefox users, because Firefox has more or less the look and feel of IE6, which makes it a better prospect for users who don’t like change. Now isn’t it a good thing that Microsoft made the IE7 upgrade compulsory?

        http://www.cybertopcops.com

      • #3141355

        The Firefox Market Share Myth

        by ret.miles ·

        In reply to The Firefox Market Share Myth

        I question the benefit of lumping Gecko browsers.  Bach and other Baroque composers developed rules of harmony which form the basis of music still written today, and pioneered a system of compromise keyboard tunings which work for any key.  Using the harmonic system developed during the Baroque, and using similar notation symbols, Frederic Chopin and Scott Joplin both composed music for the piano tuned using the tuning system also developed during the Baroque.  And they both played the piano using similar fingerings which usually rely on 10 fingers on two hands.

        Yet, it would be a mistake to lump Chopin and Joplin together just because they use the same building blocks / operating environment / constraints / tools / fingering and wrote for the same instrument.  Baroque harmonic rules and tunings do not equate to Chopin or Joplin.  Neither does the piano, nor the notational system, nor the ruled paper, nor the fingering.  

        By the same token, Gecko does not equate with Firefox any more than DOS equates with Windows or Unix equates with Mac. They are related, but each Gecko browser should be evaluated on its own merits. Also, even though there are similarities between Chopin and Joplin, there would normally be only theoretical value in lumping them together statistically. Similarly, each Gecko browser’s stats would probably best be evaluated separately.

      • #3141326

        The Firefox Market Share Myth

        by Anonymous ·

        In reply to The Firefox Market Share Myth

        Whatever they say in the studies, I usually just look at my own web server logs and see what people are using, and Google Analytics, etc. It’s mostly IE. Firefox comes in a good second though.

        Also, you have to consider that Firefox has extensions (or plugins) that let a user change the way the browser identifies itself, and thus some may set it to identify itself as an IE-type browser for compatibility with most sites they visit; they might not want to change it for different sites, but keep it set at IE-type. This can throw off a study too, since one doesn’t know for sure whether a person is using such an extension or not.

      • #3138926

        The Firefox Market Share Myth

        by i-lin ·

        In reply to The Firefox Market Share Myth

        You have to be more careful. Data analysis is more than just running a spreadsheet or a linear regression on a set of raw numbers. That kind of analysis might be acceptable for business analytics but not for scientific data analysis.

        Besides running the spreadsheet, you have to look at the pattern of numbers to see what kind of story they are telling you. For example, the large switchover in Dec 04 from Mozilla to Firefox indicates that a large number of Firefox users were misclassified as Mozilla users. This will bias the results against FF. An analyst might try to compensate for this by extrapolating the growth rate of FF back to Nov 4, the release date, estimating an FF market share of, say, 14.60% for November, and beginning the comparison there. As your intuition suggests, doing so wouldn’t change the numbers significantly, but it’s just sloppy analysis not to do it.

        A next step would be to ask why so many Mozilla users switched to FF in November. The answer is that probably a large number of them were already using a pre-release version of FF, and then just switched over to the official release. This would again bias the results of your analysis against FF.

        Another thing that a data analyst would ask about is the selectivity of the sample. People who visit w3schools are likely to be considerably more tech-savvy than people who visit yahoo.com, for instance. They are also more likely to be early adopters than the general populace. So the numbers from w3schools are likely to be higher than browser usage in the wild, and to have been higher early on.

        Anyway, I’m not necessarily disagreeing with the results of your analysis, just with the way in which it was conducted. 

    • #3138780

      Multithreading Tutorial – Part VII: Non-Atomic Performance

      by justin james ·

      In reply to Critical Thinking

      This is the seventh and final installment of a multi-part series demonstrating multithreading techniques and performance characteristics in VB.Net.

      This week’s post on multithreading takes us full circle, back to non-atomic operations. Unlike the first post that tested performance, which performed the computations in a single thread, this one is multithreaded. Although it is multithreaded and performs reads and writes on shared variables, there is no thread safety whatsoever. As a result, in real code where the contents of shared variables matter, they cannot be trusted. The variable that holds the number of completed computations happens to hold up reasonably well in this test, because all threads are doing the exact same thing: adding the number 1 to it. Even that is not strictly safe, though, since an unsynchronized increment is a read-modify-write operation and updates can occasionally be lost. If the threads were doing something less predictable (such as multiplying by the iteration number or appending to a string of characters), the results would be chaotic at best. That is an important distinction to note: while this code runs, in a real program it would probably not work, and definitely not work as expected!
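
      As a side note (this is an illustration only, not part of the downloadable test code; the class and member names are invented), the unsynchronized increment is really three steps - read, add, write - so two threads can read the same old value and one update can be lost. When only a counter needs protecting, Interlocked is the lightweight alternative to a full lock:

      Public Class CounterExample
        Public Shared UnsafeCount As Integer = 0
        Public Shared SafeCount As Integer = 0

        Public Shared Sub BumpUnsafe()
          ' Read-modify-write with no synchronization; concurrent updates can be lost.
          UnsafeCount += 1
        End Sub

        Public Shared Sub BumpSafe()
          ' Atomic increment; no separate lock object is needed.
          Threading.Interlocked.Increment(SafeCount)
        End Sub
      End Class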

      You can download an installer for this program here. The installer will also install the full source code for the project, in a subdirectory of the installation path. Feel free to try it out yourself and tinker with it or just to look at it to get a better understanding of multithreading techniques.

      This is the code that launches the threads:

      Public Sub NonAtomicMultiThreadComputation(ByVal Iterations As Integer, Optional ByVal ThreadCount As Integer = 0)
        Dim twNonAtomic As NonAtomicThreadWorker
        Dim IntegerIterationCounter As Integer
        Dim iOriginalMaxThreads As Integer
        Dim iOriginalMinThreads As Integer
        Dim iOriginalMaxIOThreads As Integer
        Dim iOriginalMinIOThreads As Integer

        twNonAtomic = New NonAtomicThreadWorker

        Threading.ThreadPool.GetMaxThreads(iOriginalMaxThreads, iOriginalMaxIOThreads)
        Threading.ThreadPool.GetMinThreads(iOriginalMinThreads, iOriginalMinIOThreads)

        If ThreadCount > 0 Then
          Threading.ThreadPool.SetMaxThreads(ThreadCount, ThreadCount)
          Threading.ThreadPool.SetMinThreads(ThreadCount, ThreadCount)
        End If

        For IntegerIterationCounter = 1 To Iterations
          Threading.ThreadPool.QueueUserWorkItem(AddressOf twNonAtomic.ThreadProc, Double.Parse(IntegerIterationCounter))
        Next

        While NonAtomicThreadWorker.IntegerCompletedComputations < Iterations

        End While

        Threading.ThreadPool.SetMaxThreads(iOriginalMaxThreads, iOriginalMaxIOThreads)
        Threading.ThreadPool.SetMinThreads(iOriginalMinThreads, iOriginalMinIOThreads)

        twNonAtomic = Nothing
        IntegerIterationCounter = Nothing
      End Sub

      And here is the code for the class itself that performs the work:

      Public Class NonAtomicThreadWorker
        Public Shared IntegerCompletedComputations As Integer = 0
        Private Shared DoubleStorage As Double

        Public Property Storage() As Double
          Get
            Return DoubleStorage
          End Get
          Set(ByVal value As Double)
            DoubleStorage = value
          End Set
        End Property

        Public Property CompletedComputations() As Integer
          Get
            Return IntegerCompletedComputations
          End Get
          Set(ByVal value As Integer)
            IntegerCompletedComputations = value
          End Set
        End Property

        Public Sub ThreadProc(ByVal StateObject As Object)
          Dim ttuComputation As ThreadTestUtilities

          ttuComputation = New ThreadTestUtilities

          Storage = ttuComputation.Compute(CDbl(StateObject))

          CompletedComputations += 1

          ttuComputation = Nothing
        End Sub

        Public Sub New()

        End Sub
      End Class

      Here are the results of our tests. All tests are for 1,000,000 iterations, and the results are in milliseconds per test run.

      TEST 1

      This test allows the ThreadPool to manage the total number of minimum and maximum threads on its own:

                     Test 1      Test 2      Test 3      Test 4      Test 5     Average
      System A    16578.125   17296.875   15359.375   14453.125   19265.625   16590.625
      System B    16296.666   16296.666   17562.275   15859.172   14140.444   16031.045
      System C    17328.347   19140.870   19625.251   19531.500   23125.296   19750.253
      System D    30250.194   30140.818   29531.439   30078.318   29500.189   29900.192

      Average (all systems): 20568.029

      TEST 2

      In this test, we limit the maximum number of threads to one per logical processor:

                     Test 1      Test 2      Test 3      Test 4      Test 5     Average
      System A    11046.875   10796.875   10968.750   10906.250   10843.750   10912.500
      System B    18624.762   19796.622   26874.656   13359.204   14577.938   18646.636
      System C    12234.532   13390.796   26000.333   31641.030   12656.412   19184.621
      System D    29468.939   29297.063   29468.939   29500.189   29406.438   29428.314

      Average (all systems): 19543.018

      TEST 3

      This test uses only one thread:

                     Test 1      Test 2      Test 3      Test 4      Test 5     Average
      System A    10812.500   11078.125   12265.625   10781.250   13296.875   11646.875
      System B    14749.811   14906.059   19718.498   38577.631   41999.462   25990.292
      System C    12812.664   12609.536   13453.297   16078.331   13234.544   13637.674
      System D    29406.438   29484.564   30234.569   29375.188   29468.939   29593.940

      Average (all systems): 20217.195

      TEST 4

      This test uses two concurrent threads:

                     Test 1      Test 2      Test 3      Test 4      Test 5     Average
      System A    12937.500   13453.125   14218.750   15593.750   13718.750   13984.375
      System B    54249.306   30396.266   30036.824   21266.850   18468.986   30883.646
      System C    19531.500   17172.095   19656.502   18203.358   22312.786   19375.248
      System D    29437.688   29359.563   29625.190   29437.688   29468.939   29465.814

      Average (all systems): 23427.271

      TEST 5

      Here we show four concurrent threads:

                     Test 1      Test 2      Test 3      Test 4      Test 5     Average
      System A    22468.750   20437.500   23703.125   22828.125   21203.125   22128.125
      System B    22719.041   32422.290   20484.637   30828.520   34328.564   28156.610
      System C    36047.336   39578.632   37469.230   40000.512   34203.563   37459.855
      System D    30297.069   29359.563   29343.938   29312.688   29359.563   29534.564

      Average (all systems): 29319.789

      TEST 6

      This test uses eight concurrent threads:

                     Test 1      Test 2      Test 3      Test 4      Test 5     Average
      System A    24906.250   25046.875   24250.000   24812.500   24734.375   24750.000
      System B    37453.604   36453.125   29078.125   30562.500   33890.625   33487.596
      System C    32891.046   32266.038   32969.172   32516.041   32766.044   32681.668
      System D    29406.438   29422.063   29375.188   29390.813   29422.063   29403.313

      Average (all systems): 30080.644

      TEST 7

      Finally, this test runs 16 simultaneous threads:

                     Test 1      Test 2      Test 3      Test 4      Test 5     Average
      System A    24937.500   25125.000   24859.375   24546.875   24734.375   24840.625
      System B    44749.427   43311.946   33749.568   36218.286   39358.871   39477.620
      System C    32937.922   32719.169   32953.547   32734.794   32812.920   32831.670
      System D    29468.939   45390.916   45765.918   34687.722   34578.346   37978.368

      Average (all systems): 33782.071

      System A: AMD Sempron 3200 (1 logical x64 CPU), 1 GB RAM

      System B: AMD Athlon 3200+ (1 logical x64 CPU), 1 GB RAM

      System C: Intel Pentium 4 2.8 GHz (1 logical x86 CPU), 1 GB RAM

      System D: Two Intel Xeon 3.0 GHz CPUs (dual-core, Hyper-Threaded, providing 8 logical x64 CPUs), 2 GB RAM

      It is extremely important to understand the following information and disclaimers regarding these benchmark figures:

      They are not to be taken as absolute numbers. They were taken on real-world systems with real-world OS installations, not clean benchmark systems, and they are not a concrete measure of relative CPU performance. They simply illustrate the relative performance characteristics of the different multithreading techniques on different numbers of logical CPUs, and show how different processors can perform quite differently with different techniques.
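
      For anyone who wants to run a similar scaling comparison on their own hardware, the general shape of such a harness is simple: split a fixed amount of work across 1, 2, 4, 8, and 16 threads and time each run. The sketch below is a generic Java illustration of that idea; the names (ScalingSketch, busyWork) are hypothetical, and this is not the benchmark code that produced the tables above.

      import java.util.concurrent.*;

      // Generic sketch of a thread-scaling harness (hypothetical names, not the
      // benchmark used for the tables above). For each thread count, the same
      // total amount of work is split across the threads and timed.
      public class ScalingSketch {
          static double busyWork(int iterations) {
              double x = 0;
              for (int i = 1; i <= iterations; i++) x += Math.sqrt(i);
              return x;
          }

          public static void main(String[] args) throws Exception {
              final int totalIterations = 20_000_000;
              for (int threads : new int[] {1, 2, 4, 8, 16}) {
                  ExecutorService pool = Executors.newFixedThreadPool(threads);
                  Future<?>[] futures = new Future<?>[threads];
                  int slice = totalIterations / threads;
                  long start = System.nanoTime();
                  for (int t = 0; t < threads; t++) {
                      futures[t] = pool.submit(() -> busyWork(slice));
                  }
                  for (Future<?> f : futures) f.get();   // wait for every slice
                  long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                  pool.shutdown();
                  System.out.println(threads + " thread(s): " + elapsedMs + " ms");
              }
          }
      }

      Compile with javac ScalingSketch.java and run with java ScalingSketch; the absolute numbers will be meaningless outside your own machine, which is exactly the point of the disclaimer above.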

      As you can see, avoiding locks entirely results in significant performance gains over the tests that used the various locking mechanisms. However, it is important to understand the ramifications of not using locking. If any data needs to be shared amongst threads, locking will have to come into play. Judicious use of locking should keep the performance hit from being too high. It is also important to note that in many situations, running your computations in a single thread will actually be faster than using multithreading; it all depends on your hardware and on what you will actually be doing. As I have said before, “your mileage will vary.” Test, test, and test again to see which techniques work best for your particular application.
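
      To make that trade-off concrete, here is a minimal Java sketch (hypothetical names: LockingSketch, sharedTotal, partial) contrasting a shared counter guarded by a lock with per-thread accumulation that merges results only after the threads finish. It illustrates the general technique, not the code behind the figures above.

      import java.util.ArrayList;
      import java.util.List;

      // Minimal sketch: a shared, locked counter versus per-thread partial sums
      // that are merged once at the end (no locking inside the hot loop).
      public class LockingSketch {
          static final int THREADS = 4;
          static final int ITERATIONS = 5_000_000;

          static long sharedTotal = 0;                 // protected by 'lock'
          static final Object lock = new Object();

          public static void main(String[] args) throws InterruptedException {
              // 1) Locked version: every increment contends for the same monitor.
              long t0 = System.nanoTime();
              List<Thread> locked = new ArrayList<>();
              for (int t = 0; t < THREADS; t++) {
                  Thread th = new Thread(() -> {
                      for (int i = 0; i < ITERATIONS; i++) {
                          synchronized (lock) { sharedTotal++; }
                      }
                  });
                  locked.add(th);
                  th.start();
              }
              for (Thread th : locked) th.join();
              System.out.println("locked:   " + (System.nanoTime() - t0) / 1_000_000
                      + " ms, total = " + sharedTotal);

              // 2) Unlocked version: each thread owns its own slot; merge after join.
              long t1 = System.nanoTime();
              long[] partial = new long[THREADS];
              List<Thread> unlocked = new ArrayList<>();
              for (int t = 0; t < THREADS; t++) {
                  final int slot = t;
                  Thread th = new Thread(() -> {
                      long local = 0;
                      for (int i = 0; i < ITERATIONS; i++) {
                          local++;
                      }
                      partial[slot] = local;           // a single write per thread
                  });
                  unlocked.add(th);
                  th.start();
              }
              for (Thread th : unlocked) th.join();

              long mergedTotal = 0;
              for (long p : partial) mergedTotal += p;
              System.out.println("unlocked: " + (System.nanoTime() - t1) / 1_000_000
                      + " ms, total = " + mergedTotal);
          }
      }

      Both versions arrive at the same total, but the first pays for a monitor acquisition on every increment, while the second touches shared state only once per thread after the work is done. Keeping locks (and shared-state access in general) out of the hot path is one way to apply the judicious-locking advice above.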

      We have come to the end of this series. As always, feedback and comments are appreciated.

      J.Ja

    • #3280239

      Why is Buying Computer Hardware So Much Work?

      by justin james ·

      In reply to Critical Thinking

      Why is buying computer hardware so much work? To be sure, there are some components, like power supplies and most peripherals, that are pretty straightforward to choose. Even some of the more complex hardware, like hard drives and motherboards, is fairly easy to select: you decide which features you need, and you find the parts that have them. But selecting CPUs and video cards is nearly impossible.

      A long time ago, CPUs were easy to select; you just bought the CPU with the highest clock speed you could find. You chose Intel because you did not have a choice (or much of one, at least; non-Intel CPUs tended not to be very good). Now there are dozens of CPU families from more than one manufacturer (Intel, AMD, and more, if you count the less mainstream CPUs). Each family has its own benefits and drawbacks, and raw clock speed is less of a determining factor in overall performance than it used to be. Would a dual-core CPU at 2.0 GHz per core be better than a single-core CPU at 3.0 GHz? It all depends on your needs. What about cache? Coprocessors? And so on. At this very moment, the choice is fairly easy: the Core 2 Duo (Conroe) CPUs from Intel are beating the competition, even when compared to much more expensive CPUs. But no one expects that to last, and the last month or so has been the first time in a while that CPU selection was fairly easy. Even when it has been confusing, though, the simple rule that never went wrong was to get as much cache and as many clock cycles and cores as possible.

      Video cards, on the other hand, have never been simple, and probably never will be. This is what has had me stuck for some time in my recent quest to upgrade and revamp my home network and computers. Picking out everything else was pretty painless, with the exception of deciding how much I wanted to spend on hard drives, and whether I preferred RAID 1 with two smaller drives to one smallish drive for data plus a big drive for backups (I decided to go with RAID 1; it was cheaper and will be faster). But the video card is still killing me.

      I went to Tom’s Hardware Guide, and I will tell you, that site is positively worthless. I really do not see why people treat it as the source of all hardware information. It is not. I read the guide to video cards, and what the site calls an “about $200 video card” was listed for no less than $250; indeed, even on the site’s links to vendors, they were listing that “about $200 video card” for “about $300”! The recommendations could not even be found at half of the sites that I prefer to shop at.

      I think video cards are a hoax at this point; someone wants me to give up and just spend whatever my maximum budget is, and hope that I get the best card possible. It is ridiculous.

      Buying computer hardware gives me severe heartburn. Following hardware trends is an exercise in futility, as whatever I learn today is useless tomorrow. There is no good way to directly compare too many of the components, and all too often my choice is either to pay for features I do not want or to give up features I do want in order to stay within budget. Overall, the process kills me, and I wish it were easier.

      J.Ja

      • #3280183

        Why is Buying Computer Hardware So Much Work?

        by dawgit ·

        In reply to Why is Buying Computer Hardware So Much Work?

        Welcome to the “Consumer” driven world. It gets worse. It’s also why it’s important to have a local computer & electronics shop that you can trust on your list of suppliers. (As in a real shop.) Everything in the so-called market is the ‘latest, greatest, & fastest’, even if the people actually selling that particular item are clueless as to what it even does. And one can forget about finding replacement items. (It’s all history, even if it’s only from the last 3 or 4 months.)

      • #3280155

        Why is Buying Computer Hardware So Much Work?

        by justin james ·

        In reply to Why is Buying Computer Hardware So Much Work?

        dawgit –

        When I lived in New Jersey, good local computer shops were all over the place. Unfortunately, here in Columbia, South Carolina, they are a rarity. While I always expect to pay more for parts locally, it should be evened out (or almost evened out) by not having to pay shipping; around here, parts are a full 25%+ more expensive than they are from Newegg and other similar, credible online retailers. The only things I buy locally now are large items like cases, and things where a 25% markup is only a few dollars, like USB hubs and NICs.

        But you are right about the consumer culture end of things. I really need to get around to reading The Paradox of Choice. It is just amazing to me how many nearly identical choices there are out there, and I always feel like I made a mistake afterwards. A few months ago, I bought equipment at my job. A day after the two dual Xeons arrived, Intel slashed the price on them because the Core 2 Duo had been released. Oh well. The only thing that kept me sane on that one was that it was not my money, at least.

        J.Ja
