General discussion

  • Creator
    Topic
  • #2187484

    bITs and blogs

    Locked

    by apotheon ·

    blog root

All Comments

  • Author
    Replies
    • #3242745

      on blogs and IT

      by apotheon ·

      In reply to bITs and blogs

      It’s a dangerous thing to run a blog about one’s IT career. It’s the technically proficient who are most likely to read a website like this, and who are most likely to read a tech-related blog, and it’s the technically proficient on whom I’m most likely to want to make a good impression as my career advances. Blogs are a good way to make bad impressions on people you’ve never even met.

      Blogs are self-conscious things. They assume an audience, and without that audience there’s no point to them. Sure, you might think of the idea of a private journal or diary, but blogs are not that at all: they’re very public things. If you’re reading one that isn’t your own, you’ll find it difficult to justify such a thing on purely private grounds without lying to yourself in the process. As such, you should expect that every blog you read is written with its audience in mind. Things will always be censored to some degree, in some way, or will be made intentionally shocking for the “benefit” of the audience. You’ll see a lot of passive-aggressive behavior in blogs, a lot of pity-seeking, a lot of trend-chasing and a lot of trend-resisting, and a lot of attempts to spread ideas and opinions to infect the minds of others. This blog is no exception. Keep that in mind whenever you read what I have to say about someone else here, and keep that in mind whenever you read anyone else’s blog entries, too.

      Blogs can be great advertising. People link to things they want other people to see. Sometimes they use those links to advertise themselves by making their own blogs seem interesting by way of what they link. Sometimes they advertise things they support or other online endeavors of theirs by linking to them. Often, they do both at once. It’s typical to see people throwing gratuitous links to things into their blogs to drive up traffic and to show how cool they are, and also to drive traffic to whatever it is they’re linking. What that will sometimes translate to in TR blogs is links to businesses that TR members run or that employ them, in hopes of increasing business. I’ll do that too, right now, just to get it out of the way. Here’s one of my employers:

      Wikimedia Foundation

      Here’s what you’ve probably already seen of the Wikimedia Foundation:

      Wikipedia

      As of this writing, I’m a thirtyish professional computer geek: I work both as a datacenter technician (for the above-linked nonprofit organization) and as an IT consultant (as an employee of a consultancy, mostly doing (1) web development and (2) unix/linux systems implementation and management). I’m an ethical theorist with very strong and well-thought-out political opinions, and that spills over into my analyses of IT industry trends quite a bit. I’m an INTJ, a “synthesist” whose strengths apparently center on qualitative analysis, according to the personality tests. I’m also “smart” and “lazy”, according to Rommel’s leadership metrics.

      Perhaps I should explain that: Field Marshal Erwin Rommel, despite taking orders from a bunch of murderous, genocidal Nazis (but I repeat myself), was a brilliant military commander in World War Two. He made interesting statements about the desirable qualities of leaders, and had he not been hampered by basically insane superiors (Hitler, in short), he might well have won the Second World War for Germany. I’ll just directly quote what he said about how he selected leaders within his command.

      “Men are basically smart or dumb and lazy or ambitious. The dumb and ambitious ones are dangerous and I get rid of them. The dumb and lazy ones I give mundane duties. The smart ambitious ones I put on my staff. The smart and lazy ones I make my commanders.”

      Enough about me. What do you think of me?

      In case you’re interested, you can look up Field Marshal Rommel at Wikipedia.

      • #3237905

        on blogs and IT

        by oz_media ·

        In reply to on blogs and IT

        Good read, in fact. Even as I am trying to move away from IT again, I know I will become more involved in the fall. So you just may keep my interest in IT alive.

        As for looking up Erwin Johannes Eugen Rommel, I always thought you were a wee bit younger than that, but it is a good picture if it’s recent.

        And you’re right in saying that he was a brilliant commander; I’ve seen several biographies and historical recreations that credit him well. There are a lot of things Germany did that were very clever, yet they are often looked at as being so inferior. I feel a great deal of that actually came later in the war, when Hitler had deteriorated mentally from his STDs.

      • #3236110

        on blogs and IT

        by bob starr ·

        In reply to on blogs and IT

        Every so often a post comes along that represents a real paradigm shift. This is one such post. I never really thought about the impact of a “simple” blog turning on you from a career point of view. I’ll have to watch what I write from now on… oh, man, I hope this doesn’t sound too stupid… darn first impressions. Thanks for the terrific post!

      • #3236075

        on blogs and IT

        by av . ·

        In reply to on blogs and IT

        I think it took a lot of guts to be the first one to take the step into blogging on TR. After all, you are putting a more personal side of you out for all to see; it makes you very vulnerable. It’s different than posting to the TR forum because that only shows your view on that particular subject.

        I’m not sure what to think of blogging yet as a medium. It’s a great advertising technique, I have to agree, and it also offers anyone anywhere their 15 minutes of fame. I’ve been looking at a lot of blogs lately because I’m working on one for work, and the sad thing is that many of them have no comments to their daily posts. They all work so hard too. I still don’t think that makes them useless, though. It’s actually a good outlet for the person involved, and it does give you a web presence all your own. A blog is truly a modern-day diary about what you thought about on that day.

        The Desert Fox is an interesting choice to mention. You must be a history buff. You have also compared yourself to the smart and lazy ones that became commanders. That’s good. I don’t think I’d ever say I was one of the dumb and lazy ones in my blog; though I don’t know, smartness doesn’t guarantee popularity.

        Ok, tell me, what is an INTJ?

      • #3242590

        on blogs and IT

        by dc_guy ·

        In reply to on blogs and IT

        OK American Voter, apparently you didn’t follow the links to the Jungian paradigm of the human spirit and you’ve never done a Myers-Briggs personality profile. None of this will make sense to a Freudian, much less to someone who isn’t interested in psychology as a way of studying the human spirit. So here’s the decoding of INTJ, made as short as possible.

        I: introverted, not extroverted. In the Jungian use of the terms, an extrovert is slowly energized by dealing with ANY other people, whereas an introvert is slowly drained.

        N: guided by intuition, not senses.

        T: processes using thoughts, not feelings.

        J: trusts judgment, not perception.

        Any three of these vectors occurring together make up the obvious formula for the anti-“people person”: a geek, someone who lives internally and keeps his own counsel. All four of them together are the classic IT professional, “Give me a computer any day, I’ll never understand people.”

        I was a textbook INTJ, with four strong vectors, the first time I took this test about 25 years ago. Over the years I’ve mellowed. Now my N and J are less strong, and I’m right on the borders between I and E and between J and P.

        There’s hope for geeks.

        BTW Joseph Campbell was a magnificent popularizer of Jung’s work. He worked heavily with “archetypes,” images that occur in our “collective unconscious,” i.e. in nearly all cultures and eras. You can transpose a discussion in Jungian jargon into one involving Greek gods or Shakespeare’s characters to make it more accessible.

        The concept of the archetype itself is wonderfully accessible. To a pragmatist it’s a preprogrammed synapse that happens to be included in our brains because sometimes nature is random. To an evolutionary biologist it’s an instinct that at one time was a survival trait so the individuals who had it were around to pass down their genes. To a spiritualist it’s a legend that was breathed into us by the Goddess on our way down the birth canal.

      • #3235692

        on blogs and IT

        by av . ·

        In reply to on blogs and IT

        Actually, DC_GUY, I didn’t see any links to explain INTJ in the post. I have never taken a Myers-Briggs test though I do know what they are about. Many years ago I worked in a contract position for an outplacement agency for executives. They relied heavily on those tests for placement purposes.

        Thanks for your explanation; it really put things in a nutshell. I’m glad to hear that you’ve mellowed over the years. I think you should rephrase that and say you’ve grown. Geeks today have to have it all.

      • #3176267

        on blogs and IT

        by jmgarvin ·

        In reply to on blogs and IT

        Nobody reads my blog, so it is a diary. 😉

        I tend to post mostly dry howto stuff, but I do vent on occasion. I also tend to think that if a company doesn’t want me based on what I’ve blogged, then they probably would have fired me after a while anyway.

        Generally, what I post and what I blog is how I am in the meat world. I am a pretty straightforward person and I like to joke a bit, get a little serious, and discuss technical things (aka geek out). I also think you hit the nail on the head. I tend to fall into all 4 categories at any given time. To be honest, I couldn’t rate myself at all given how Rommel describes people…

        Apotheon, you fall into the smart category for sure. However, I don’t think I know you as a worker, so I don’t know where you fall in the lazy or ambitious categories. From the sound of things (how you post, what you say, and your web presence) you are smart and ambitious.

        Excellent post.  I quite enjoy your blog…

    • #3237892

      wanted poster

      by apotheon ·

      In reply to bITs and blogs

      Well. Oz_Media has created a wanted poster inspired by my own subtly personalized TechRepublic user icon. I might as well commemorate and chronicle the occasion here, as my first non-introductory TRB post. Here ’tis:

      Wanted: Apotheon

      Feel free to have a good chuckle. I know I did.

      Oz_Media has passed copyright for the image to me (TR logo included under fair use), and I hereby provide it to the public under terms of CCD CopyWrite with fair use restrictions in place as regards the TR logo.

      • #3236768

        wanted poster

        by apotheon ·

        In reply to wanted poster

        this is a test
        this is only a test

    • #3237796

      Destroying Tokyo

      by apotheon ·

      In reply to bITs and blogs

      So this person named Kat said something about Godzilla, and that inspired this post. She wants everyone to “know” that this was all her idea.

      Yeah, right.

      [Mozilla icon]

      vs.

      [Microsoft icon]

      That’s right, folks, Tokyo is in trouble again. An epic battle between titanic monsters looms, and untold destruction will result. It’s Mozilla Versus Microsoftra!

      Sadly, that’s all I’ve got. I’m lacking in originality right now. I rather imagine a big red dinosaur stepping on some pathetic little multicolored butterfly/moth thing, then going on about its day like nothing happened. Could be fun. As I was composing this, doing searches for icons and so on, I did make the k3wlest monster noises, though. You really missed out on quite a show here. The other four people here (one of them about two years old), on the other hand, were much amused. Well, maybe frightened. It’s hard to say.

      Roar.

    • #3238774

      Washington outlaws Windows

      by apotheon ·

      In reply to bITs and blogs

      The Washington State House and Senate passed Engrossed Substitute House Bill 1012. This means that spyware is now illegal in the great State of Washington, home of Microsoft (in Redmond, Washington). In fact, Microsoft helped support the passage of the bill, along with eBay.

      Does anyone else here see the irony of this? Here’s Microsoft, purveyor of some of the worst spyware on the planet (especially by the broad standards of Bill 1012), supporting the passage of a law in its own home state that will effectively make a criminal organization of it.

      Consider what you know of Microsoft’s planned “Black Box” functionality for Longhorn, its inclusion of spyware-riddled software such as MSN Messenger and Windows Media Player in Windows already, and its recent use of Windows Update functionality to remove competing products from end-users’ computers. Now read this description of what is defined as “spyware” according to the new Washington law:

      software that opens “multiple, sequential, stand-alone advertisements in the owner or operator’s internet browser”, logs keystrokes, takes over control of the computer, modifies its security settings, falsely represents “that computer software has been disabled”, or prevents “through intentionally deceptive means, an owner or operator’s reasonable efforts to block the installation or execution of, or to disable, computer software by causing the software that the owner or operator has properly removed or disabled automatically to reinstall or reactivate on the computer”.

      The only question now is whether the EFF and ACLU will get together and sue the crap out of Microsoft as they should. One can only hope.

      • #3242598

        Washington outlaws Windows

        by dc_guy ·

        In reply to Washington outlaws Windows

        What are the details of the law? It’s easy for legislators to sit in their chamber and draft rules, but it’s much harder to apply them to real life. I’d be surprised if there was language in the law specific enough to prosecute the people who threw together the software that enables the invasion of spyware.

    • #3232167

      Now Playing: Slack Nerdeth – MMORPGs

      by apotheon ·

      In reply to bITs and blogs

      I’m not much of a fan of MMORPGs. They just seem like a tremendous waste of time to me. They’re misnamed: they should be called MMOAGs, or something, for “Adventure” rather than “Roleplaying”. Computer roleplaying games of any kind are just roleplaying games without the roleplaying.

      To understand my perspective, maybe it would help to understand that I started playing real, pencil-and-paper RPGs in the early-to-mid-eighties. I know what “roleplaying” really means in the abbreviation RPG. There is a lot missing from computer “RPGs” of any kind that we just don’t have the technology to recreate from “real” RPGs using computers yet. What’s missing is what makes an RPG into an RPG rather than a “Choose Your Own Adventure” book.

      I know some people who play MMORPGs. That’s fine. To each their own. I’m not actually offended by MMORPGs, or other computer RPGs, at all. I just find them to be silly, pointless wastes of time. I probably have hobbies that are silly, pointless wastes of time too.

      Maybe. Well, maybe not, but I don’t necessarily see such hobbies as being “bad”. The fact that I’m very, very good at constructing arguments and recognizing spin on neutral facts makes it easy for me to chide friends for playing MMORPGs, though, so I do. It’s fun. I guess that’s one of my silly, pointless hobbies that wastes time. That’s okay. They do the same to me about some of my own hobbies. I cheerfully accept chiding for being such a Linux geek. Such is life. Recently, though, I found myself singing entirely the wrong lyrics to the tune of Black Sabbath’s “War Pigs”. I’ll post them here for your enjoyment. The following is released under CCD CopyWrite, just like everything else in this blog.

      Dorks are gathered in their masses
      Just like methods in their classes
      Newbies desperate for instruction
      In skills of character construction
      Ships and victims bleeding, burning
      Worlds of Warcraft keep on turning
      City of Heroes wasting time
      Poisoning their brainwashed minds
      Jolt Cola!

      Computer game geek hide away
      From the burning light of day
      Why go out and get real jobs?
      They leave that to dads and moms

      Time slips by, more power-ups
      Adding gemstones to their hoards
      Collecting golden coins and cups
      Playing GUI games without boards
      Save!

      The world outside could just stop turning
      I’d never know my house was burning
      Now the MMORPGs have the power
      Evercrack has stolen hours
      I’m selling characters to newbies
      And buying magic swords and XPs
      The swivel chair I’m sitting in
      Is adhering to my skin
      /pizza!

      • #3238392

        Now Playing: Slack Nerdeth – MMORPGs

        by Jay Garmon ·

        In reply to Now Playing: Slack Nerdeth – MMORPGs

        I think Scott Kurtz said it best with this classic PVP comic.

        I actually excised video games from my life after nearly losing a whole term in college to the original PC X-Wing simulator (damn trench run mission) and Sid Meier’s Civilization. However, I still avidly roleplay via tabletop, and I love tabletop strategy games (Risk 2210 AD, for example).

        What I’ve come to realize about live roleplaying and gaming as opposed to online roleplaying and gaming is that the in-person experience is invariably richer and more memorable. It isn’t as convenient as the instant gratification of online games, but you get more out of it.

        The same was true of TechRepublic’s live Roadshow events: people talked and exchanged ideas and solutions in person more enthusiastically and efficiently than they ever would have online. People are built to work with other people, and while online games are getting more intuitive and more complex, they can’t outdo the innate human disposition to communicate, and I doubt they ever will.

    • #3260183

      posting comments

      by apotheon ·

      In reply to bITs and blogs

      I’ve been sorta kicking the subject of comments in the TR blogs around
      inside my own head. I’m
      used to threaded comments for blogs, where you can reply to a comment
      with a comment of your own directly, rather than your comment simply
      being tacked onto the end of the list of “root” level comments as it is
      here at TR.

      I’ve been thinking about the advantages and
      disadvantages of a threaded comments approach, and I’ve realized that
      it’s nice, for a change, to have a blog that does not have that
      feature. For one thing, it reduces the conversational tone of the blog
      to a certain extent, which changes the entire character of the blog.
      This way, it’s more geared toward article-like blog posts and comments
      directed at the blog’s author, rather than becoming sort of a social
      circle centered around that author. In short, it’s a little more
      “professional” this way.

      I’ll be taking advantage of this
      situation as an excuse to order my blog in a particular, probably
      abnormal, manner: I won’t post comments in it, ever. If I have
      something important enough to say that is inspired by a comment, it
      will be said in a blog article. This should help cut down on the level
      of fluff, among other benefits, and will probably cut down on flame
      wars as well, since the ability to answer back in anger will be
      severely curtailed (in part by my nonparticipation, and in part by the
      lack of threading in comments).

      As for the frequency of posts,
      it will certainly come in crests and troughs, with multiple posts over
      a day or two and several days of silence. I’ll post when I feel like
      it, and when I have something interesting to say. I won’t post random
      crap just to ensure a steady flow of words. When you read something
      here, it’ll be because I thought what I posted was worth saying,
      whether or not it’s worth reading.

      I’ll likely post at minimum
      once a week, at least for a while, though my online activity level in
      any given forum rises and falls in irregular cycles, generally. It’s
      not like I’m getting paid for this, after all.

    • #3260179

      What’s INTJ, and what has it got to do with IT?

      by apotheon ·

      In reply to bITs and blogs

      As TR member AmericanVoter reminded me in a comment to a previous post, not everyone in the world knows about the Jung Typology and the derived MBTI personality tests.

      Famed psychologist Carl Jung developed a theory of psychological types, known generally as Jung Typology. This begins with some general statements about how people’s personalities can be described by classifying them according to categories of psychological characteristics. For instance, most of the rambling that goes on in common parlance about “introverts” and “extroverts” arises from Jung’s uses of the terms, though most people misuse them terribly. Also important in Jung Typology are four modes of experience that bear the labels Thought, Feeling, Sensation, and Intuition. These have been grouped in opposed pairs, where Thought and Feeling are opposite sides of one coin, and where Sensation and Intuition are opposite sides of another. To these three pairings (counting Introversion/Extroversion) a fourth was added, consisting of Judging and Perceiving.

      These four opposed pairs are assembled into a letter-code personality classification that makes up the basis of the Myers-Briggs Type Indicator, which is in turn used as the set of metrics on which many personality tests are designed. I’ve taken a few, and I tend to vary somewhat between two results: I’m always either an INTJ or an INTP, depending on the test and, I imagine, my mood and current personality traits. I score INTJ rather more often than INTP, and as such occasionally refer to myself as an INTj, indicating a weaker attachment to the J than to the other three letters in that label.

      If you’ve been paying attention, you might by now have guessed that INTJ means I’m Introverted, Intuitive, Thinking, and Judging, primarily (though the Judging might by some measures and at some times actually be replaced by Perceiving). The INT personality types are the analytical types, and among INTs the J-types are more the qualitative variety and P-types are more the quantitative variety. In other words, INTJs are big on analyzing for value, and INTPs are big on analyzing for details. I do quite a bit of each, but for me the details of a thing are (usually) primarily useful as a means to the end, that being evaluation.
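
      To make that letter-code assembly concrete, here’s a minimal sketch in Python. The preference values are made up purely for illustration; this is a toy rendering of the labeling scheme, not an actual MBTI scoring instrument.

      # Toy illustration of how the four Jungian dichotomies assemble into a
      # four-letter type code. The preference values below are hypothetical;
      # a real MBTI-style instrument derives them from scored questionnaire items.

      DICHOTOMIES = [
          ("E", "I"),  # Extroversion / Introversion
          ("S", "N"),  # Sensation / Intuition
          ("T", "F"),  # Thought / Feeling
          ("J", "P"),  # Judging / Perceiving
      ]

      def type_code(scores):
          """scores: four numbers in [-1.0, 1.0]; negative leans toward the
          first letter of each pair, positive toward the second."""
          return "".join(second if s > 0 else first
                         for (first, second), s in zip(DICHOTOMIES, scores))

      # Hypothetical scores leaning introverted, intuitive, thinking, and
      # only weakly judging: the "INTj" described above.
      print(type_code([0.8, 0.7, -0.9, -0.1]))  # prints "INTJ"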

      Being an INTJ suits me well as a consultant. So, too, would INTP. While INTP is probably best suited to technology implementation once all the major decisions are made, INTJ is probably best suited to making those decisions, and in advising others on the making of such decisions. In addition to having the personality type suited to that kind of work, I’m also rather intelligent. No false modesty here: I know my limitations, and to some degree I also know my strengths, and don’t feel like pretending they don’t exist for humility’s sake.

      INTs tend to make the best programmers, and Js and Ps each have their own strengths within that. They’re synthesists, good at taking a series of preexisting parts and assembling them into something new and useful. They’re not, however, usually the best salesmen, and that failing is one that I suffer quite notably. This is one reason I work better for a consultancy than as a consultant on my own: I’m no extrovert, which is where real sales talent lies. I rather suspect that the best salesmen are ENTJs, and that the best chairmen of the board are ENTPs, but don’t quote me on that (without including disclaimers that it’s just wild speculation on my part).

      There is some speculation about the actual validity of the MBTI tests, including the official MBTI test whose trademark is owned by a trust that exists for that purpose, if I’m not mistaken. There are a great many copy-cat tests out there, however, which seem to be able to get away with it by virtue of the fact that the MBTI itself is derived from the theories of Carl Jung. These copycat tests are of varying worth, and when I say that I’ve taken many MBTI-based tests, I don’t know how many were copycats and how many were officially sanctioned derivatives of the MBTI itself (though I know that at least one was just a copycat, and am not at all sure that any were “official” MBTI tests).

      Here’s an MBTI-like test for you:

      Human Metrics Jung Typology Test

      • #3180426

        What

        by jmgarvin ·

        In reply to What’s INTJ, and what has it got to do with IT?

        Thanks for the test. I used to be an INTP, but now I am an INTJ. 
        An interesting switch, I’d say.  While I don’t think the INT will
        change, I am surprised by the switch from P to J.  I would guess
        it is because I’ve become far more cynical and jaded than I was 10
        years ago when I first took the test 😉

        Thanks!

      • #3115530

        What’s INTJ, and what has it got to do with IT?

        by ruisert ·

        In reply to What’s INTJ, and what has it got to do with IT?

        I’m an INTP, and was first introduced to MBTI by my aunt, who was at the time Dean of Psychology at a major uni. It really helped finding out it was OK to be what I was, since INTPs and INTJs each make up only about 1% of the total population. There’s a lot of pressure to conform to the norm, as extraverts make up about 75% of the population. Interesting that the writer and the two responses so far are all from these two groups.

    • #3239162

      Is anyone buying this?

      by apotheon ·

      In reply to bITs and blogs

      Wipro is a technology consulting
      firm that benefits from a “strategic alliance” with Microsoft,
      including “building financial models and ROI calculators for Microsoft
      product deployments” and “co-product development and engineering of
      Microsoft products”. They’re about as deeply enmeshed with Microsoft as
      a company can be without actually being owned outright. Wipro is even
      engaged in “managing call center operations for Microsoft product
      support” and “.NET evangelism with focus on enterprise and mobility
      applications”. In other words, Wipro’s business model is tightly tied
      to developing, maintaining, and marketing Microsoft product lines and
      services. This is anything but independent. For more detail, check out
      the Wipro Technologies website: Wipro and Microsoft: a strategic relationship

      Recently,
      Microsoft commissioned a study of patch management costs, comparing
      Windows platform patch management with open source platform patch
      management. Considering the above explanation of the relationship of
      Wipro to Microsoft, and the fact that Microsoft commissioned the study
      and reported the results, I suspect you can guess whether or not the
      study favored open source software for patch management costs. Once
      again, a Microsoft-commissioned study performed by a company that is
      little more than a Microsoft lackey shows Microsoft undercutting the
      cost of software that can be had for free. Go figure.

      The study
      results, in a little more detail, indicated that the patching costs
      were very competitive between the Microsoft platform and the open
      source software platform. It was admitted in the study that Microsoft
      systems required more patching, but this is supposedly balanced by the
      fact that the per-unit cost of patching for Microsoft systems was
      lower. Of course, it’s also claimed that patching is easier for Windows
      systems, a subjective, qualitative statement that cannot really be
      statistically demonstrated in a study, and can thus be safely spouted
      without worrying about the facts disproving the claim. When pinned down
      on such issues, of course, Microsofties will always fall back on the
      old “Windows GUIs are better!” argument. In my own experience, a simple
      shell or Perl script handling sorting and distribution of apt patching
      on Debian systems (for example) is about as easy as it can possibly get.
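
      Just to make that concrete, here’s roughly the sort of thing I mean, sketched in Python rather than shell or Perl. The hostnames are hypothetical, and it assumes key-based SSH authentication and passwordless sudo are already set up on the target Debian machines; a real script would add logging, scheduling, and error handling.

      # Rough sketch of distributing apt patching to a list of Debian hosts
      # over SSH. Host names are hypothetical; assumes key-based SSH auth and
      # passwordless sudo on each target machine.
      import subprocess

      HOSTS = ["web1.example.com", "web2.example.com", "db1.example.com"]

      def patch(host):
          """Refresh package lists and apply pending upgrades on one host."""
          cmd = ["ssh", host, "sudo apt-get update && sudo apt-get -y upgrade"]
          result = subprocess.run(cmd, capture_output=True, text=True)
          return result.returncode, result.stderr

      if __name__ == "__main__":
          for host in HOSTS:
              code, err = patch(host)
              print(f"{host}: {'ok' if code == 0 else 'FAILED'}")
              if code != 0:
                  print(err)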

      This
      study, of course, actually compared corporations that use third-party
      patch management software for organization and distribution of software
      patches. These third-party solutions cost money to use, of course.
      Interestingly, the per-unit per-patch cost that showed Microsoft’s
      stuff being cheaper used the same back-ends as the open source
      software. Why is the per-patch per-unit cost lower? Simple: more
      Windows systems were required to perform to similar standards.
      Since the back-end cost was for the network, and not for the individual
      units, where Windows systems require more computers in place the
      per-unit cost involves a smaller division of the total cost over the
      larger number of units, leading to a lower “per-unit” component of the
      cost. Add to that the fact that patches are more numerous for Windows
      systems than for Linux systems, and the “per-patch” component of your
      cost is lower as well, because of similar division of cost between
      instances. Considering that the per-patch cost division occurs on each
      individual unit, that means that your total manipulation of cost
      figures to produce “per-patch per-unit” costs involves dividing the
      total cost by a number reached by multiplying the number of units by
      the number of patches. Thus, the total cost of your solution is likely
      more because of greater labor overhead (longer hours spent in
      patch-management), but since that total cost for your network is
      divided into such small numbers when divided by number of patches and
      number of units, your per-patch per-unit cost can be significantly
      lower. Thus, cleverly massaged statistics prove the sky is orange.
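
      To see the massage at work, run some numbers. The figures below are invented purely for illustration, not taken from the Wipro study, but the arithmetic is the point: a costlier network can still show a lower “per-patch per-unit” cost simply because its total is divided by more units and more patches.

      # Hypothetical numbers, invented purely to illustrate the arithmetic.
      def per_patch_per_unit(total_cost, units, patches):
          return total_cost / (units * patches)

      # Windows-style scenario: more boxes, more patches, higher total cost.
      windows = per_patch_per_unit(total_cost=120000, units=60, patches=40)
      # Open source scenario: fewer boxes, fewer patches, lower total cost.
      linux = per_patch_per_unit(total_cost=80000, units=40, patches=10)

      print(f"Windows: ${windows:.2f} per patch per unit")  # $50.00
      print(f"Linux:   ${linux:.2f} per patch per unit")    # $200.00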

      This would be why such
      Microsoft-sponsored studies always examine per-unit costs for
      distributed bulk services: multi-unit
      services that only cost once can be whittled down when you split them
      up over a larger number of systems, since a larger number is often
      needed to achieve the same end results. Go figure. This turns the
      real-world increase in financial and management resource expenditure
      for a full-network solution into a decrease in resource expenditure for
      each individual unit. So much the better for your study results if you
      have to perform the same tasks over and over again, allowing a decrease
      in expenditure for each iteration, though the total cost climbs.

      Ultimately, none of this is all that
      relevant to real-world costs. Real-world costs for actual patch
      management and deployment are almost nothing for a knowledgeable
      admin, regardless of the platform. What costs money and man-hours is
      pre-deployment patch testing and post-deployment rollbacks and recovery
      when patching goes awry, and, of course, downtime from both
      patch-related complications and the patching procedure itself. These
      are, to the people who run these studies on the Microsoft paycheck,
      irrelevant ephemera, but for those of us in the IT trenches they are
      the meat and potatoes of the almighty TCO (total cost of ownership) and
      administrative overhead (how much work and stress is involved in being
      a system or network administrator).

      Microsoft’s track record in
      post-deployment complications for new patches is legendary, and not
      positive. The fact that most statistical analyses of system failures
      during XP Service Pack 2 deployments performed by IT professionals were
      in the range of 10-15% is just astounding. This sort of astronomically
      high system failure rate after a patch purported to increase system
      security and stability is simply unacceptable for most production
      environments, and drives post-deployment recovery costs through the
      roof for many administrators. Smart admins see a resulting increase in
      pre-deployment patch testing because it simply becomes that much more
      critical that patches are fully tested for potential problems before
      deployment.

      Then, of course, there’s the matter of planned
      downtime. Unplanned downtime is of course an incredible, and often
      disastrous, addition to total cost of ownership for a given platform.
      Planned downtime doesn’t compare in terms of cost increases in most
      environments. There’s still an associated cost, though: even if
      business isn’t damaged overtly by planned downtime on the revenue side,
      downtime of any kind tends to involve increased costs where such
      concerns as the salaries of admins and contractors are concerned.

      Almost
      every single important patch applied to a Windows system requires a
      system restart. The only time a system restart is required when a unix
      system (such as Linux) patch is applied is when it’s a kernel patch.
      One of the major reasons for this is the system configuration scheme of
      Windows, where all services are configured through a single flat-file
      database. This is, to say the least, suboptimal for system uptime.

      Time for some anecdotal evidence:

      I’ve
      never, in all my work as a consultant, had to recover from a Linux
      system patch. Not once. All Linux system patches and upgrades worked
      flawlessly for me. On the other hand, I literally paid my bills for a
      while on Windows patch recovery when clients started applying SP2 to
      their Windows XP systems. Linux systems used by the same clients hummed
      along, undisturbed.

      While I didn’t service the Server 2003
      systems that developed major problems with the deployment of SP1, I
      know that similarly impressive spikes in the consultancy’s revenue
      stream occurred with that patch as well.

      On top of all that, I’m
      still waiting to hear about a number of patches for things that have
      been languishing on Microsoft’s back burner for far too long. I haven’t
      heard of a needed patch for the Linux systems I support that has yet
      taken more than a few days to appear after the need was discovered. I
      can only thank my lucky stars that my duties with the consultancy are
      moving more and more into Linux and web development, and farther from
      Windows system support. The stress levels have dropped considerably. I
      get to design and implement, and spend less time fixing.

      With
      all of that in mind, and casting a scornful glance back at the
      chicanery of Microsoft and Wipro Technologies, I have to ask:

      Is anyone buying this?

      • #3180378

        Is anyone buying this?

        by jmgarvin ·

        In reply to Is anyone buying this?

        The sad truth?  Yes!  CFOs, CEOs, CIOs, and COOs are buying
        into the MS hype and FUD machine at an alarming rate!  While many
        smaller shops are moving to Linux, most larger corporations don’t seem
        to get it.  The CFOs seem to think that MS products are cheaper,
        the CIOs seem to think they are better for their IT staff and users,
        and the COOs buy into the marketing hype. 

        MS claims their patch management is top of the line (it isn’t) and very
        easy to deploy (it isn’t).  MS also claims that Linux costs far
        more than their bloated POS.

        I’m a Red Hat kinda Linux user, but I feel at home in any flavor. 
        Why?  Because across all distributions there are MANY
        commonalities that just don’t exist in the MS family.  MS needs to
        get its act together or more and more home and business users will just
        drop it and move to either *nix or OS X (ya… I know it is BSD, but it
        has its own special flavor).

    • #3180368

      from the beginning

      by apotheon ·

      In reply to bITs and blogs

      I’m going to do Something Different here. I’m going to try to be
      informative. I’ll probably be mostly informative about stuff relating
      to Linux in some way. Since a lot of people, even among IT
      professionals, don’t know a lot of the fundamentals that comprise a
      good basis for understanding Linux-related stuff, I figure I should
      probably start with some of those fundamentals. See how helpful I am?

      First off, Linux is unix, but not UNIX, or even Unix. At least,
      that’s how I refer to the various states of unix-compatibility. See,
      UNIX is a trademark that is assigned to a unix when it passes certain
      qualifications and when someone pays for the privilege of using the
      UNIX name; Unix is basically a non-term that I usually don’t use, but
      when I do differentiate between Unix and either UNIX or unix, what I
      mean is that Unix has certain characteristics relating to a family of
      related OSes that are all descended from the same ancestor, and contain
      (some of) the same code as that ancestor, but isn’t necessarily UNIX
      because nobody bothered to get it certified. The various *BSD unices
      (that’s plural for unix) qualify as Unix by this standard, because
      almost every single unix in existence has a core of *BSD-based code in
      it. Even the original UNIX line, developed initially at Bell Labs,
      contains a lot of *BSD code because *BSD code is open source but not
      copyleft, meaning that anyone can see the source code but it can be
      incorporated in closed-source software without any legal issues. I’ll
      address the various terms of software licensing in a moment, but first
      I’ll finally mention what makes an OS qualify as unix.

      Linux (pronounced “linn-ucks” or “leen-ooks”) is a unix. It is not,
      however, a Unix or a UNIX. It contains original code unrelated to the
      core *BSD code, and though I’m pretty sure it would qualify, nobody has
      ever gotten it certified as UNIX. It is a unix, however, because it
      looks like a duck, quacks like a duck, and even smells like a duck. It
      conforms to POSIX standards, it does everything UNIX and/or Unix does
      that makes them unices as well, and it is very nearly indistinguishable
      from other unices to the casual user. Linux was created entirely from
      scratch, programmatically, and was basically created by observing the
      behavior of other unices and figuring out how to write code that will
      do the same stuff.

      Linux and *BSD are both “open source” OS families. Such proprietary
      unices as SysV, Solaris, AIX, HP-UX, and so on, are not. Linux is
      “copyleft”, whereas *BSD is not. Here’s why:

      Linux is licensed under the GPL (General Public License). The
      General Public License requires that when distributing binaries
      (compiled, executable programs), you have to provide the source code as
      well. It requires that you do not restrict others from further
      distributing and modifying that code. It also requires that later
      modifications and distributions of that code are released under the GPL
      as well.

      The various *BSD kernels are licensed under the BSD license (thus
      the name). This includes FreeBSD, OpenBSD, and NetBSD, among other
      (less well-known) BSD OSes. The BSD license allows you to redistribute
      both binaries and source code as you see fit. It also requires that you
      do not restrict others from further distributing and modifying that
      code. It does not require distribution of source code, though it does
      (as already noted) allow it. It does not require that later
      modifications and distributions of that code are released under the BSD
      license, either.

      Software such as the proprietary UNIXes and Microsoft Windows is
      released under standard copyright, as modified by EULAs (End User
      License Agreements). Copyrighted software that is not released under
      other licenses is restricted from being copied or distributed in any
      form at all except in accordance with “fair use” provisions, which
      pretty much state that if it’s useless to you without duplicating it
      you’re allowed to duplicate it, but only for purposes of such use.
      Other than that, everything’s restricted, by and large.

      Before I touch on one more licensing scheme, I’ll explain how the
      various open source buzzwords fit into all this. First, there’s
      “copyleft”: if an open source license is “copyleft”, that means that it
      is automatically inherited by derivative works and copies. This means
      that if you modify and redistribute something issued under a copyleft
      license, that modified version is also distributed under the same
      copyleft license.

      The term “open source” refers to software for which the source code
      is open for viewing, modifiable, and redistributable. A similarly
      applied term, “free software”, refers to software for which it is
      required that the source code be made available to anyone who has
      access to the binaries. There’s another, far less used term that
      refers to software that is all about allowing things without
      requiring anything: it allows distribution and modification of binaries
      and source code without requiring it, in essence. This term is
      “software libre”. There’s some dispute over what these terms actually
      mean, but the general consensus and understanding of the terms seems to
      be precisely what I’ve relayed here. Both “software libre” and “free
      software” are “open source software”, but “software libre” is not “free
      software”, and “free software” is not “software libre”.

      The GPL is a free software license, an open source software license,
      and it is copyleft. The BSD license is a software libre license,
      an open source license, and not copyleft. The CCD CopyWrite is a
      software libre license, an open source license, and it is
      copyleft.
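
      Restated as data, for quick reference (the flags simply follow the definitions above, nothing more official than that):

      # The license properties described above, restated as a small table.
      # The flags reflect this post's definitions of the terms, not any
      # official classification.
      LICENSES = {
          "GPL": {
              "open source": True, "free software": True,
              "software libre": False, "copyleft": True,
          },
          "BSD license": {
              "open source": True, "free software": False,
              "software libre": True, "copyleft": False,
          },
          "CCD CopyWrite": {
              "open source": True, "free software": False,
              "software libre": True, "copyleft": True,
          },
      }

      for name, flags in LICENSES.items():
          traits = ", ".join(k for k, v in flags.items() if v)
          print(f"{name}: {traits}")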

      CCD CopyWrite is a license I created specifically because I saw the
      need for a true software libre license that was also copyleft. In
      essence, “software libre” is the state of licensing that replicates the
      conditions of the public domain (absent outside influences). You can do
      anything you like with your libre licensed software, and so can anyone
      to whom you give it: there are no legal restrictions on modification
      and distribution of the content. Only laws relating to tangential
      matters apply to software libre, such as laws relating to fraud (no
      lying about the performance or attribution characteristics of a piece
      of software). By creating a copyleft libre license, I’ve set aside a
      “protected public domain”, wherein licensed works can be treated as
      though they are within the public domain, but unlike the actual status
      of public domain works they cannot be re-copyrighted and “removed” from
      the public domain after modification to produce a derivative work.

      There you have it. I’ve made some generalized statements about what
      UNIX, Unix, and unix are, how Linux and *BSD fit the picture, what the
      various open source software categorization buzzwords are, and some
      licensing examples to fit the different categories.

      Note 1: Despite spurious claims to the contrary, no Windows was
      ever really POSIX compliant. Some components of the Windows NT system
      have, in various versions, been POSIX compliant to one degree or
      another, but it has never been a POSIX compliant OS. NTFS was at one
      time POSIX compliant: whether or not it still is compliant is something
      of which I’m not really sure. NTFS has undergone so many changes over
      the years that it’s almost unrecognizable as being related to the
      filesystem that originally bore that name.

      Note 2: I stated that Unixes come from a common ancestor. I did not
      identify that ancestor, though I hinted at both AT&T UNIX and BSD
      Unix. I made reference to BSD code in UNIXes, but did not specify how
      much of the OS is traceable to BSD. This was intentional. While the
      facts I’ve related are essentially indisputable, the opinions that can
      be derived from those facts are often in reasonable dispute. Since
      my purpose here isn’t to address that dispute, I avoided it.

      UNIX

      Linux

      *BSD

      GNU General Public License

      BSD License

      CCD CopyWrite license

    • #3180312

      hacker (n.): one who hacks

      by apotheon ·

      In reply to bITs and blogs

      There’s a term being bandied about in the media, and being used
      improperly, with dismaying regularity. This term is one that relates to
      IT professionals and enthusiasts and their shared culture. It is a term
      that helps to set us apart from the rest of the world’s population by
      our appreciation of a certain ethic, a certain aesthetic, and a certain
      metasociety that cannot be understood without exposure to, and (perhaps
      more importantly) enjoyment of, the computer geek’s world.

      The term I’m talking about, of course, is “hacker”. In the news
      media, in the press releases of corporations like Microsoft, and in
      mainstream cinema, the term “hacker” is divested of its real meaning
      and granted instead only the sinister characteristics of the computer
      criminal. This has, I think, come to pass because those outside of
      hacker culture probably never bother to notice any hacking going on
      around them unless it affects them directly and, once in a while, that
      hacking might consist of someone testing and even penetrating the
      security of computers and computer networks. To assign the term
      “hacking” only to such activities, though, is the same as assigning the
      term “pilot” only to terrorists who fly jumbo jets into skyscrapers,
      “golfer” only to those who cheat at the game of golf, “driver” only to
      those who drive while intoxicated and end up killing pedestrians, or
      “parent” only to those who molest their children.

      It’s worse than that, actually. Not all child molesters are parents,
      not all killers are drivers, not all cheaters are golfers, not all
      terrorists are pilots, and not all who crack security on computers and
      computer networks are hackers. Many, in fact, are script kiddies whose
      closest brush with actual hackers is using a network security auditing
      script some hacker wrote eight years ago. Remember that little problem
      with Newsweek inaccurately reporting the contents of an FBI memo,
      sparking a riot that killed 16 people? They’re just as wrong, and far
      more often, in the way they report computer crime.

      The term “hacker” is used at times to refer to people outside of
      computer system enthusiasts, and that’s fine. I’ve yet to see a
      non-computer-person misuse the term when referring to what they do.
      I’ve even seen people refer to themselves as hackers of “reality”,
      meaning of course that they’re screwing with the common perceptions of
      the dominant paradigm. Good for them. Let’s comfort the disturbed and
      disturb the comfortable, and call ourselves saints and hackers for
      having done so. It’s pretty difficult to find any true hacker culture
      outside of enthusiastic computer users, though.

      The term arose with the Tech Model Railroad Club (TMRC) at MIT in
      the 1960s, particularly amongst a group of members of the club who were
      also involved in the goings-on of the MIT AI (artificial intelligence)
      lab. From there, it began to be applied to other computing enthusiasts
      unrelated to TMRC, and a vast culture of hacking arose, including its
      own jargon, ethics, value system, and worldviews. As RFC 1392,
      the Internet Users’ Glossary, defines it, a hacker is “A person who
      delights in having an intimate understanding of the internal workings
      of a system, computers and computer networks in particular. The term is
      often misused in a pejorative context, where ‘cracker’ would be the
      correct term.” There’s also a reference to the term “cracker” in RFC
      1392, not to be confused with the racist insult usage of the term, nor
      with that usage of the term that denotes a snack food.

      Early hacker history is loaded with the stories of giants who walked
      the earth. Somewhere in the middle, there was a distinct paradigm shift
      coinciding with the move from OSes and computers that were wedded to
      each other to unix, the first really modular, portable separation of
      the OS from the hardware, or at least the first one that really
      caught on. This can be blamed, of course, on the concurrent creation,
      or synchrogenesis (to coin a term), of the C programming
      language and the Unix operating system. While the Internet was already
      underway before unix began to play a substantial role in it, it was
      unix that gave it the first major push toward being a public
      environment. The various unices have been the primary OS of choice for
      hackers in general ever since. There are those few true hackers that
      simply don’t use the unix environment, of course, but they are an
      exceedingly rare breed. Most people that work with computers outside of
      the realm of unix are professionals or end-users without the real
      essence of the hacker, or are strictly hardware hackers, a strange
      breed indeed. Even those hackers that have created their own OSes along
      the way generally came from unix and eventually came back to it, too.

      In the late ’70s and early ’80s, the growth of the PC industry began
      to see the independent and convergent evolution of a new class of
      computer users. They weren’t a culture, yet, though. They had terribly
      underpowered little “toys” that didn’t even have the ability to
      effectively communicate with each other over the Internet. This is one
      reason many people don’t realize just how old the Internet is: if they
      know anything about the history of computer networking with PCs, they
      probably think back to the bad ol’ days of dial-in BBSes before PCs
      could touch the Internet. It was the ISPs like Prodigy and AOL that
      ultimately brought the Internet to the masses (thank goodness we’ve
      moved on to better options now), by giving PCs something to dial into
      that would then connect them to all the wide world of the Internet, and
      it was the web browser and email that made it something worth doing.
      Then, in the early ’90s, just before the release of Windows 3.11,
      hacker culture met the scattered PC enthusiasts, and that convergent
      evolution finally came to its merging point. Linux and BSD for the 386
      were created, almost simultaneously. Both were made open source, as
      well, which suited the hacker ethic perfectly. The hacker’s home OS was
      born, and it was twins.

      Generally, one does not decide to become a hacker and pursue any set
      of required tasks to get there. It’s not a profession with certifying
      authorities, though there is a certain amount of semi-official
      recognition that cements one’s place in the culture. It’s not a skill
      set that one acquires at school or on the job, though one is never a
      hacker without skill. It’s not an attitude, though without the right
      attitude all you’ll ever be is a programmer, or a script kiddie, or a
      network administrator, or an end user, or a wannabe, or perhaps worst
      of all a suit. Hacker culture is something of a meritocracy,
      but mere ability isn’t everything: there’s also the ethic and the
      aesthetic sense, for instance. It’s all something you can’t just study
      and understand. You have to grok it.

      That’s not to say that hackers never disagree. They not only
      disagree, but can do so very noisily, obstinately, at great length.
      They even disagree regularly on subjects as fundamental as what exactly
      it is to be a hacker. Find two hackers and ask them what being a hacker
      means: if they don’t just quote RFC 1392 or the Jargon File at you,
      you’ll get two different answers. You might even get three. Put them in
      a room together, and they may argue it to death, and they may both end
      up with different opinions than those they had when they started, but
      they’ll still probably disagree on some fundamental points. If both are
      real hackers, though, they’ll surely recognize each other as such by
      the time a truce has been called and the dust has settled.

      For my part, I’ve been called a hacker by several people who know
      what the term really means, independently and without prompting. These
      are people who recognize that I have some skill, and that I grok the
      hacker life, and I really do understand it on that visceral level.
      It’s commonly accepted (if usually unspoken) tradition in hacker culture that it’s better to
      be identified as a hacker by someone else, someone that knows what he
      or she is talking about, and among my credentials is recognition by a
      bona fide rocket scientist who’s been as much a real hacker as anyone
      I’ve met for longer than I’ve known there was such a thing as Linux
      (and she has been using Slackware since version 1.x). Guess what: I
      dispute their claims. I’m not sure I qualify. It’s that pesky skill
      thing, you see. I have the enthusiasm and the interest and all the rest
      of it, but somehow I’ve just never really gotten immersed enough in
      certain key activities (programming, foremost among them) to develop
      more skill than that of a dabbler in hacking. I mean, really, there’s
      an assumption in the term “hacker” that, to be one, you have to “hack”.
      I’ve had some close brushes with activities that carry that name, and
      I’m even the recent founder of a very small hacking club, of sorts, but
      as for real experience in hackish activity, well, it’s a little sparse.

      Some of these people who have thusly granted me title certainly know
      more than I do about the matter. Perhaps I should defer to wiser heads
      than mine. I know I don’t want to be the wannabe that self-identifies
      without proper justification, though. I’m not comfortable accepting
      that appellation at this time. I may never be.

      I know I get annoyed when some idiot reporter or Microsoft marketing
      executive uses the term to describe something lower than the scum on
      the soles of my 14-hole Doc Marten boots, though.

      • #3180962

        hacker (n.): one who hacks

        by Jay Garmon ·

        In reply to hacker (n.): one who hacks

        You know you’re a real hacker if you don’t want to be called one. Anyone claiming to be a hacker is a script kiddie with delusions of grandeur.

      • #3180873

        hacker (n.): one who hacks

        by jmgarvin ·

        In reply to hacker (n.): one who hacks

        It drives me crazy when I teach my Hacking class and my students think
        it is all instant gratification kind of stuff. I have to explain
        to them OVER AND OVER that you really have to examine your target
        carefully, poke and prod it a bit, and THEN attack.

        I also have a hard time getting some of them past the script kiddie
        mentality. My best trick is to show them how to forge email (a
        party trick to be sure) and explain that it is the tip of the iceberg.

    • #3181046

      living history

      by apotheon ·

      In reply to bITs and blogs

      I’ve seen him, and probably even exchanged words with him, a couple
      of times before last night. I just never realized who he was until
      conversation over a table at Denny’s.

      Paul Kunz is
      awesome. The first webserver to be set up outside of Europe was his
      fault, at the Stanford Linear Accelerator Center where he spent much of
      his time in 1991. He came back from meeting with Tim Berners-Lee, the
      creator of the World Wide Web, at CERN, and along with a couple of
      colleagues got North America onto the Web for the first time.

      I’m an avid user of Linux. I use Debian GNU/Linux whenever I can get
      away with doing so. When I have to use another distro (or, heaven
      forfend, Windows) for work-related activities, I simply have less fun.
      Debian makes me feel good. It’s not just Debian, though: my favored
      computing environment uses the GNUstep framework with the WindowMaker
      window manager. It turns out that Paul Kunz was the guy that kicked off
      the GNUstep project in the early ’90s, too.

      Last night, we sorta introduced ourselves to each other and
      conversed, along with about a dozen other Linux enthusiasts at a LUG
      meeting in Denny’s. I couldn’t believe my luck when I realized who he is
      and what he’s done. I even put together sort of a programming
      enthusiasts’ study group a few weeks ago in part so that I’d have an excuse to work
      with others on learning Objective-C, the programming language in which
      GNUstep is primarily implemented. Yes, I like GNUstep that much.

      These days, I make more money doing web programming than anything
      else. I’m using technologies that this man pioneered on this continent
      about a decade and a half ago.

      I guess, in a way, he’s sort of a hero of mine. I certainly admire the man.

      . . . and last night, I met him. He’s a nice guy, very personable. I
      wonder what his reaction will be when he finds out I started a
      Wikipedia article about him.

      • #3180969

        living history

        by Jay Garmon ·

        In reply to living history

        THAT. IS. SO. COOL.

        Hey, if this LUG gets off the ground, let me know. We may want to talk to you about our TechRepublic Chapters project.

    • #3172431

      T3H: programming for fun and profit

      by apotheon ·

      In reply to bITs and blogs

      Here it is, Friday morning. Every Thursday evening now, I attend T3H
      meetings. I get together with some friends that have some similar
      interests, and we . . . learn stuff. It is, so far, sort of a
      programming study group, with the primary goal of helping its members
      learn stuff. I put it together in the first place (yes, I’m the person
      who organized it) from members of a local LUG as an excuse to create an
      environment where I wasn’t learning new programming skills alone and in
      a vacuum. I wanted more people to play off of, in person, where we’d be
      there specifically to help each other with new concepts and new
      implementations.

      It all started with Objective-C, but as of last night that has
      changed. The group member who knows the most about the language was
      running into a wall, in part because of a lack of the necessary skills
      to be leading the charge alone and in part because he doesn’t actually
      know the syntax of the language well enough to be teaching it. As such,
      the whole group hashed out the needs, wants, and skills of the group as
      a whole. From this, we developed a new, more useful plan for the future.

      This, of course, perfectly suits one of the first rules of good
      software design: plan to throw the first one away, for any given value
      of “one”.

      So, Objective-C is on the back burner (for those of us who plan to
      ever revisit it). We’re going in new directions. The new directions, in
      this case, number three, with three different languages that we’ll be
      studying as a group, simultaneously.

      Basically, we’re going to be attacking essential concepts of
      programming, in each case first in theory (by discussing the concepts),
      then in usage and implementation (by discussing how they’re used and,
      possibly, by working up pseudocode), and finally in practice (by using
      the three example languages we’ve chosen to give these concepts useful
      form). Yes, three languages.

      See, there are three different basic types of languages that we have
      decided we need to learn in order to get the full run of important
      theories of program design. Each of us knows at least one programming
      language (and by “programming language”, I mean “Turing-complete
      language”, and by “knows”, I mean “has at least a functional familiarity with”), and the number of languages known with any real facility
      varies between us from “one” to “half a dozen or more”, but nobody has
      as complete a knowledge of all three of these types of programming
      language methodologies as we’d like. The three types are procedural,
      functional, and object-oriented*.

      The representatives of each language type are C for procedural
      programming, Scheme for functional programming, and Ruby for
      object-oriented programming. In addition to procedural, functional, and
      object-oriented programming methodologies, however, these languages
      also represent a second trinity, this time of programming aesthetics,
      though this aspect of the reasons to choose the languages went largely
      unremarked: the “shiny new thing”, the “obscure, eccentric genius”, and
      the “venerable workhorse”.

      In Ruby, everything is an object; it is a “true”, or “pure”,
      object-oriented language. It’s the useful scripting language child of
      Smalltalk and Perl, in essence. It has a reputation for power,
      flexibility, rapid development, intuitive ease of learning and use, and
      a whole bunch of other wonderful (if improbable, by some measures)
      characteristics. It’s an example of a “shiny new thing” language, as
      well, fitting it into the trinity of programming aesthetics.

      In Scheme, everything (or at least nearly everything) is a function.
      I’m less familiar with what makes Scheme what it is than I am with
      Ruby, though from what I understand it is about as close to a “true” or
      “pure” functional language as you’re going to get without resorting to
      simply writing programs in Lambda calculus (which would be akin to
      trying to communicate using English without nouns). Scheme is one of
      the Lisp family of languages, which as a whole comprises one of the
      canonical examples of the “obscure, eccentric genius” programming
      aesthetic. In fact, the two primary approaches to Lisp seem to be
      embodied in Common Lisp (a large, sprawling language) and Scheme (a
      more succinct, terse language that grew from the same base). Granted, I
      could be mistaken. This will be my first-ever foray into the land of
      Lisp and its imitators.

      Finally, of course, there’s C. C is the procedural representative,
      though it is not the most “true” or “pure” procedural language out
      there. It is, however, procedural enough to get the point across by a
      fairly wide margin, and it’s a language that every serious code hacker
      should know. Period. It’s the language to which all modern languages
      owe their compilers and interpreters, most of them owe their syntax,
      and all owe their running environments. It’s the language in which
      operating systems are written, because you don’t want to do everything
      in kernel development with automatic garbage collection or object
      oriented design. It’s the only language developed specifically because
      a high-level tool for operating system design didn’t already exist. It
      is, truly, the practical, pragmatic, ubiquitous, venerable old
      workhorse of programming languages. Unix and C are each the other’s
      mother, in a strange sort of incestuous, mutually procreative,
      synchrogenetic relationship. It’s going to be the example language
      representing procedural programming, and that’s that.
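
      Just to make the paradigm talk a bit more concrete, here’s roughly the
      sort of thing I mean by procedural style in C: plain data, plus
      explicit step-by-step functions that transform it. This is only an
      illustrative sketch, not anything the group has actually written.

        #include <stdio.h>

        /* Procedural style: plain data and explicit steps that transform it. */
        static double average(const int *values, int count)
        {
            int i;
            int sum = 0;
            for (i = 0; i < count; i++)
                sum += values[i];
            return (double)sum / count;
        }

        int main(void)
        {
            int scores[] = { 70, 85, 93, 60 };
            printf("average: %.2f\n", average(scores, 4));
            return 0;
        }

      The same job in Scheme would more naturally be a fold over a list, and
      in Ruby a method call on the array itself, which is exactly the sort of
      contrast we’re hoping the three-language approach will highlight.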

      Conveniently enough, most of what makes Objective-C what it is will
      be covered by these languages, particularly C and Ruby. Objective-C
      is a true superset of C, unlike that OOP (object-oriented programming)
      kluge C++, which means that if you learn all of C you’ll know all of
      Objective-C except the OOP structure. Ruby gets its OOP structure (its
      OOP semantic elements) from Smalltalk, which is precisely the source
      of Objective-C’s OOP structures. Between C and Ruby, then, Objective-C
      will be only a step away. Well, good.

      I’ve lucked out. I get to learn Ruby and Scheme from the ground up,
      and I frankly haven’t done much with C either. I get to learn a whole
      lot of stuff here. For the small price of organizing and
      coordinating all this, I get to learn everything without having to
      shoulder the responsibility of contributing superior expertise. I get
      to go back to just being a student again. Because I have experience
      with other languages (like PHP, Perl, and Object Pascal, for instance),
      and because I enjoy reading books about programming theory**, I also
      have free rein to comment and rattle on and not be perceived as a
      prejudicial idiot. This should be great fun.

      Did I mention that I get to learn? That’s such a barrel of fun that
      I don’t know what to do with myself. Happy times are ahead.

      * = Objectional? Hah. That’s funny.

      ** = I quite strongly recommend both Eric Raymond’s The Art of Unix
      Programming and The Pragmatic Programmer by Andrew Hunt and David
      Thomas. They’re thoroughly excellent books on programming theory.

      • #3172341

        T3H: programming for fun and profit

        by jmgarvin ·

        In reply to T3H: programming for fun and profit

        Let me add Beginning Linux Programming by Stones and
        Matthew. I also quite like Advanced Programming in the
        Unix Environment by Stevens.

        A good book to pick up C is A Book on C by Kelley and Pohl.

      • #3170276

        T3H: programming for fun and profit

        by agilbertson ·

        In reply to T3H: programming for fun and profit

        Actually, there are three elements to Scheme: functions, atoms, and
        lists.  An atom is just a piece of data (character string, numeric
        data, pile of bytes); a list is a group of atoms or lists; and
        functions are…well, functions.

        At least, that’s the way I remember it.  While it may not be
        totally correct, thinking of it that way lets me actually write
        programs in Scheme.  You should probably check out the PLT Scheme
        website; they’ve got some resources on learning Scheme and a fairly
        decent IDE/interpreter called DrScheme.  (I used this when I was
        taking Matthias Felleisen’s courses at Northeastern.)

    • #3170329

      Understanding OSes: Booting

      by apotheon ·

      In reply to bITs and blogs

      This is part one of the Understanding OSes series. Find more at the Table of Contents.

      When a computer first boots, it starts up the BIOS (Basic
      Input-Output System), a sort of rudimentary operating system designed
      to work (hopefully) without user intervention. Its purpose is to find
      and activate the bare minimum hardware needed to allow the discovery
      and activation of an operating system that is actually designed to
      interact with the user. The BIOS provides a central sort of
      abstraction that accepts input from various devices and offers output
      interfaces from which devices can receive data. This allows your
      computer to trade information between components so that the processor
      can access an operating system on your boot media (hard drive, floppy
      disk, et cetera), load necessary bits of it into memory, and ultimately
      begin running software so that it interacts with humans in the real
      world in some way.

      This leads to booting the operating system. Interestingly enough,
      the operating system itself then has to redo everything the BIOS did,
      but it has to do it in its own (generally much more complex) fashion so
      that more than just the bare minimum hardware is detected and
      activated, and so that more complex operations can be attempted. This
      entire process is referred to as “booting”, an evolution of the term
      “bootstrapping”, which in turn arose from the quaint image of pulling
      oneself up by one’s bootstraps. The idea is that the OS must create its
      own foundation for operation in order to run effectively. It would be
      very much like trying to lift oneself off the ground by tugging on
      one’s bootstraps if the BIOS wasn’t there to provide a point of
      leverage. In the case of large-kernel OSes (which is to say: anything
      you’re likely to encounter in a comparative discussion of modern OSes),
      there is actually an intermediary step called the “bootloader”, because
      the BIOS is in fact too limited to access enough system resources to
      load the entire kernel all by itself. Instead, it fires up the
      bootloader, that intermediary pseudo-OS, which then in turn does for
      the operating system proper what the BIOS did for it.

      The most fundamental part of the operating system is the kernel.
      In the process of discovering hardware, activating it, and
      creating software interfaces with hardware (usually through the HAL, or
      Hardware Abstraction Layer, in modern personal computer OSes), your OS
      probably does some autoprobing. What this means is that the kernel’s
      boot process has a set of data describing hardware types, and it looks
      around at what is available to see if anything matches the hardware
      definitions in that set of data. It thus probes for hardware and keeps
      track of what it finds. Where it’s looking for this data is at I/O
      ports, which are “addresses” of data buses. A data bus is basically a
      pathway along which data can travel between hardware components, and
      I/O ports are bus addresses that relate to data input and output
      functionality.
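
      Incidentally, on Linux/x86 you can poke at I/O ports from userland if
      you’re root, which makes the idea a little less abstract. The snippet
      below is just a hypothetical sketch (port 0x60 is traditionally the
      keyboard controller’s data port); it is not what a kernel’s autoprobe
      code actually looks like.

        /* Hypothetical sketch: read one byte from an x86 I/O port on Linux.
         * Requires root privileges and x86 hardware; 0x60 is traditionally
         * the keyboard controller's data port. */
        #include <stdio.h>
        #include <sys/io.h>   /* ioperm(), inb() -- glibc, x86 only */

        int main(void)
        {
            unsigned short port = 0x60;
            unsigned char value;

            if (ioperm(port, 1, 1) != 0) {   /* request access to one port */
                perror("ioperm");
                return 1;
            }
            value = inb(port);               /* read a byte from the port */
            printf("port 0x%x returned 0x%02x\n", port, value);
            return 0;
        }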

      If you’re using an operating system that displays the results of
      hardware autoprobing, you’ll see a series of text messages on the
      monitor of your computer (or hear it on a speech-only system, or
      whatever suits your particular setup; for all I know, some of you might
      operate systems by smell, though that would certainly require a very
      customized setup). As it recognizes hardware at these I/O ports, your
      OS kernel in some manner loads drivers: software components that
      provide a means of communication between the hardware components and
      the internal workings of the OS. In most modern OSes, there is a
      particular part of the kernel whose job it is to gather input from
      hardware and provide it to the rest of the kernel, and to pass kernel
      output on to the I/O ports. That part of the kernel is the HAL, and it
      provides a means for most of the kernel to be the same no matter what
      hardware platform is being used to run the OS. Only the drivers and HAL
      need to change between hardware platforms, providing for a great deal
      of OS portability so that the hardware doesn’t always have to be
      exactly the same.

      This all leads us to the first two major sticking points in
      comparing the major personal computer OSes with which we’re familiar.
      How autoprobing is handled and how the HAL is designed create
      significant differences in how the OS behaves under certain
      circumstances. DOS and Windows (which inherits its autoprobing
      capability almost unchanged from DOS, even after all this time) use a
      very rigid, inflexible mechanism for hardware autoprobing. This
      mechanism is bad at identifying hardware because it doesn’t listen very
      well. What it does, in general, is ask each I/O port “Are you this?”
      over and over again for many values of “this”, where each “this” is
      some piece of hardware the OS has been told to expect. If you have
      installed drivers tailored for a given piece of hardware at a given I/O
      port, the kernel will default to that driver first and ask the hardware
      “Are you this, for which I have a driver?” When there are no specific
      drivers, the kernel can try only the very limited set of options given
      to it at the “factory”, one at a time, until it finds something that
      looks like a match.

      If you happen to have more than one set of drivers installed that a
      particular type of hardware’s answer might resemble, you might get a
      hardware conflict situation. A piece of hardware that lurks beyond a
      given I/O port will be asked “Are you this?” and will respond “I fit
      that description”, but if the driver is actually for another piece of
      hardware that is simply similar enough that the OS isn’t sure how to
      differentiate, you get the wrong driver associated with a piece of
      hardware. The autoprobe then fails. Microsoft tried to solve much of
      this by introducing the PnP (Plug and Play) system, which essentially
      consists of a huge database of drivers pre-installed but only activated
      when hardware matches up with them, and by increasing the number of
      characteristics the kernel uses to describe a hardware expectation when
      probing at a given I/O port. The result is a trade-off between the
      amount of time spent autoprobing and the likelihood that the right
      drivers will be identified for a given piece of hardware: the more
      thorough the probe, the longer it takes, but the better the odds of a
      correct match.
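
      To put the “Are you this?” dance in more concrete terms, here’s a toy
      sketch of that style of probing in C. Every name and value in it (the
      port list, the driver signatures, the read_port() helper) is invented
      purely for illustration; real drivers identify hardware by reading
      vendor and device IDs, PCI configuration space, and the like.

        #include <stdio.h>

        /* Toy model of "Are you this?" autoprobing. All names and values
         * here are invented for illustration and match no real hardware. */
        struct driver {
            const char   *name;
            unsigned char signature;   /* the byte this driver expects to see */
        };

        static const struct driver known_drivers[] = {
            { "hypothetical-nic",   0xA5 },
            { "hypothetical-sound", 0x3C },
        };

        /* Stand-in for reading a device's identification register. */
        static unsigned char read_port(unsigned short port)
        {
            return (port == 0x300) ? 0xA5 : 0x00;
        }

        int main(void)
        {
            unsigned short ports[] = { 0x220, 0x300 };
            int p, d;

            for (p = 0; p < 2; p++) {
                unsigned char seen = read_port(ports[p]);
                for (d = 0; d < 2; d++) {
                    /* "Are you this?" -- compare against each known driver. */
                    if (seen == known_drivers[d].signature)
                        printf("port 0x%x: matched %s\n",
                               ports[p], known_drivers[d].name);
                }
            }
            return 0;
        }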

      The various unices tend to differ somewhat in how good they are at
      autoprobing. This difference largely depends on how much developer time
      has gone into hardware interaction performance for a given unix OS.
      Some unices have placed more focus on security utilities, some on
      number-crunching, and so on. Linux has, from the beginning, had a lot
      of attention lavished on the performance of the kernel in relation to
      hardware. One benefit of this has been a lot of attention on
      autoprobing, and as a result the Linux kernel is very, very good at
      autoprobing. It basically listens better than most other OSes, and is
      designed to be very good at handling the data it gets from I/O ports
      during autoprobing to choose driver modules. The term “driver modules”
      will come up again later, by the way, in another of these articles.

      There are those who theorize that it is the cleverness of the Linux
      kernel’s autoprobing that allowed it to be as rapidly successful among
      developers (who are also users) as it has been. By being good at
      autoprobing, it was easier to install than other unices: unices in
      general have lacked “user friendly” installers for years, with the
      exception of those proprietary unices that have been closely wedded to
      proprietary hardware platforms so that autoprobing is largely
      unnecessary. This means that the user who wishes to install a unix on a
      given computer must do so with a fairly rudimentary installation
      interface, if there is a cohesive installer at all. With the original
      Linux installations, this actually had to be accomplished with no
      installer whatsoever. Instead, a boot floppy was used to get started,
      and part of the process involved compiling a kernel from source on the
      machine on which you plan to run it. For those who know this process,
      at least in theory, no further explanation is needed. For those who
      don’t, suffice to say that it is a long, involved process, and largely
      irrelevant for this exposition on the OS boot process.

      If it wasn’t for the fact that it is as good at autoprobing as it
      is, Linux would have taken much longer to move beyond that stage and
      further advance in market share and mindshare. It attracted developers
      because it is a unix, with all the power and flexibility that implies,
      and because it was much easier to install than its brethren. Being an
      open source project, the Linux kernel’s popularity among developers
      also ensured that it got more development, not only in improving on its
      already clever autoprobing capability, but also on everything else it
      did. As such, it broke away from the pack early and gained popularity,
      performance, and functionality more quickly than it would otherwise
      have done. That’s the theory, anyway, and that’s why it’s now looking
      at the situation of being a real contender for market niches previously
      thought to be the sole province of Microsoft and Apple, with
      bit-players like Amiga, BeOS, and NeXTstep momentarily hovering at the
      fringes as technically superior, but undermarketed, alternatives.

      Once hardware is identified, and drivers are in place, the first
      stage of the boot process is complete. Your computer has reached what
      unix hackers often call “run level 1”. The next step involves running
      an initiating process, often called “init”. This process checks storage
      media out and starts up other processes, such as unix daemons and
      Windows services running in the background. These are the programs like
      print spoolers, incoming mail listeners, and local web servers, which
      are always paying attention for possible incoming instructions whether
      from the network or from user applications that might call on them.
      Once these background processes are running, the initiating process
      will start up your interface (at least, that’s the order of things in
      theory, though Windows often violates that and unices can be made to do
      so with startup scripts). In unices, getty (or equivalent) is started
      to watch consoles for command line input at the shell. In Windows, a
      GUI environment is started immediately, and in modern Windows versions
      the console input processes aren’t started at all unless the user or
      some user application accesses the command line.
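
      To give init a more concrete shape, here’s a stripped-down sketch in C
      of its classic respawn loop: fork a child, exec something getty-like on
      a console, and start it over whenever it exits. Real init
      implementations do a great deal more than this, and the /sbin/getty
      path and tty name here are only placeholders.

        /* Minimal sketch of init's classic "respawn" behavior. The program
         * path and tty name are placeholders, not a real configuration. */
        #include <stdio.h>
        #include <unistd.h>
        #include <sys/types.h>
        #include <sys/wait.h>

        int main(void)
        {
            for (;;) {
                pid_t pid = fork();
                if (pid == 0) {
                    /* Child: become the console login prompt. */
                    execl("/sbin/getty", "getty", "38400", "tty1", (char *)NULL);
                    perror("execl");          /* reached only if exec fails */
                    _exit(1);
                } else if (pid > 0) {
                    /* Parent: wait for the child to die, then respawn it. */
                    waitpid(pid, NULL, 0);
                } else {
                    perror("fork");
                    sleep(1);                 /* don't spin if fork keeps failing */
                }
            }
        }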

      Then, finally, user applications and high-level daemons and
      background services are started. While your network may be initiated
      early in the init process in Linux, for instance, networking servers
      (such as an SSH server, an FTP server, and so on) are started after the
      basic interface processes are begun. The same holds true for Windows.
      This is because such processes actually use the interface as part of
      their operation. Where the default interface is the GUI (Graphical User
      Interface), where any CLI (Command Line Interface) is actually an
      application running inside the GUI, this imposes quite a lot of
      resource overhead on the operation of such services. This is part of
      the reason that so many services in Windows have been incorporated into
      the ever-more bloated kernel, and that those that haven’t been
      incorporated into the kernel often make use of services that behave in
      unsafe ways such as by inappropriate use of RPCs (Remote Procedure
      Calls), in an attempt to recapture performance otherwise lost to the
      GUI’s RAM and CPU demands. Such tricks to bypass the security enforced
      by a strict separation of system processes from user processes are not
      necessary with unices because of the fact that the GUI isn’t integral
      to the OS, and thus doesn’t impose the same resource inefficiency on
      the higher level services.

      I don’t mean to suggest, of course, that these are the only reasons
      such things happen, or that these are the only consequences that
      proceed from those causes, but they are the most relevant to this topic.

      This concludes part one of the Understanding OSes series. Part two is Understanding OSes: Kernel Modularity

      • #3171626

        Understanding OSes: Booting

        by jmgarvin ·

        In reply to Understanding OSes: Booting

        Cool!  I look forward to that!  Kernel modularity is one thing I’m really keen on and I’d like to hear more!

    • #3171043

      Wikimedia downtime

      by apotheon ·

      In reply to bITs and blogs

      Tuesday morning this week (tomorrow), there will be a Wikimedia planned network outage at approximately 7AM UTC (that’s 3AM EDT, which is local time for the servers in Florida). The reason is that we’ll be moving the entire network from one physical location to another. The span of this outage is at this time unknown, but shouldn’t be terribly long, barring unexpected delays. The most popularly known website that will be affected by this will be Wikipedia. I think the admin devs are planning to switch access load to squids in France, providing read-only access for the bulk of the network outage, but it’s entirely possible that I imagined that aspect of it all. I’m just the datacenter technician.

      This means I’ll be working with several others unplugging, hauling, and reconnecting servers in the wee hours of the morning. Should be fun.

      • #3171004

        Wikimedia downtime

        by Jay Garmon ·

        In reply to Wikimedia downtime

        Yipes! Better get cracking on my next Trivia question before my crutch gets taken away.

    • #3171541

      quotes from IRC

      by apotheon ·

      In reply to bITs and blogs

      16:31 d “Much of the cruft results from C++’s attempt to be
      backward compatible with C. Stroustrup himself has said in his
      retrospective book The Design and Evolution of C++ (p. 207), “Within
      C++, there is a much smaller and cleaner language struggling to get
      out.””
      16:32 me . . . and it’s Objective-C.
      16:32 me Actually, it’s probably not.
      16:33 d could be java.
      16:33 T c# ?
      16:33 d it is cleaner and smaller, just in the wrong ways…
      16:33 * T awaits the flames
      16:33 d C# is java with a different syntax…
      16:34 T runtime is a bit different too
      16:34 T although you could argue that the language is independent from the runtime
      16:34 c c# is a little better lang wise than java
      16:34 c but they still suck
      16:34 d the fact you have a language that runs in a virtual machine
      yet has introspection slower than Smalltalk on 20 year old hardware
      says something…
      16:35 T hehe, until last year washington mutual ran all their home loan software on a smalltalk app on os/2
      16:36 d cool
      16:37 T then they replaced it with a browser-based javaish frontend on top of MS xslt crap middleware on top of MS crap servers, spent
      billions of dollars, and cut productivity down to 1/5 of what it was
      16:38 d wow.
      16:38 T amazing how a company like that can be so IT-stupid
      16:40 me funny as hell, too
      16:40 T funny if you don’t work for them…
      16:41 c verizon is trying to do the same
      16:41 c but they havent been able to eliminate their mainframe dbs
      16:41 c they cant duplicate the functionality

      Of course, I blame Sun for all this. Java was a nifty idea, and some
      good ideas were incorporated into it, but the truth of the matter is
      that the implementations of Java that are actually advantageous are
      quite limited. It’s a “virtual machine”-based language, compiled to
      “bytecode” that is then interpreted by a VM at runtime, which means
      that execution is slow, in some cases slower even than languages that
      use a traditional interpreter. In addition, much of the reason for
      Java’s failure to live up to expectations for WaMu, despite the fact
      that Smalltalk is also a VM language, is that the JVM is basically
      broken by design. I’m not as clear on the internals of the Java virtual
      machine as some, but I know people whose judgment on the matter I trust
      with nothing good to say about the JVM.

      Much like Microsoft, though, Sun subscribes to the notion that
      something new should be sold to everyone as a panacea. Thus, Java has
      been put into use writing static platform applications, games, and even
      server software. Server software! That’s nuts. The whole point of Java
      from the beginning was portability of client application code. What
      advantages Java can provide are all best suited to client software in
      unknown computing environments. For some reason, though, a language
      whose implementation is anything but spry, combining the performance
      and flexibility detriments of interpreted and compiled languages in one
      single package, is being used for server-side dynamic webpages,
      database management systems, server-side accounting software, and
      everything else under the Sun. Speaking of that, I do blame Sun. Java
      has been made a buzzword, and as a result it has been used in numerous
      implementations that are entirely inappropriate for its use.

      Java employs some C-like syntax and Smalltalk-like object oriented
      structure, but manages to screw them both up; it is essentially what
      Objective-C would be if designed by a marketing executive instead of a
      mathematician: broken, but capable of fairly portable code (just as
      C++ is what Objective-C would be if designed by a computer scientist:
      broken, but at least a good performer). Of course, that portability can
      as easily be achieved by use of framework libraries and good OOP
      modularity rather than a virtual machine. You really can’t fight hype,
      though.

      Now, we’ve got .NET, which is a little closer to what Java should
      have been. There’s even a Java spin-off language called J# that’ll run
      on the .NET framework (which, despite the name, is really just a
      glorified VM with extensions). .NET (and its non-MS implementations,
      including Mono) is almost as limited in appropriate scope as Sun’s
      Java, but we can expect that it will be pushed as the next great
      panacea. People will be trying to use it everywhere, for everything.
      ASP.NET is in full swing now, for instance, despite the inadvisability
      of running .NET server-side. What good is bytecode-compiled server-side
      software, anyway? Either run something interpreted (or, even better,
      compile-at-runtime), or just use a compiled language for better
      performance.

      . . . or, you can continue training for your career as a
      Pointy-Haired Boss in the grand tradition of Dilbert’s manager. Have
      fun with that.

      • #3171384

        quotes from IRC

        by jmgarvin ·

        In reply to quotes from IRC

        What!?? .Net isn’t the silver bullet?  It isn’t the ultimate panacea!?  NO!

        While I think the JIT that MS built is far more capable, I see MAJOR
        security issues down the line with ASP.Net.  I don’t know what form
        they will take, but I have a feeling it will be holes in the .Net
        framework or with the way ASP makes server side calls to the .Net
        “stuff” it needs.

        ‘Course I’m a PHP kinda guy anyway, so I’m a little biased.

        My big question is whatever happened to REAL interpreted
        languages!??  Something like Perl or Python (I’m not starting a
        Perl vs Python war…many were left dead or maimed after the Perl vs
        Python battle of ’00).  I love how these languages work and how
        they interact with their environments.  While they aren’t perfect
        (ok…Perl is 😉 ) they do do some things VERY well.

    • #3182981

      cheatsheets

      by apotheon ·

      In reply to bITs and blogs

      So I’ve been absent from this blog for a bit. Get over it.

      In other news, I’ve decided to start aggregating helpful “cheatsheets” on one of my websites. They’ll be plain text files, each relating to a single area of interest. You’ll be able to find them at http://www.apotheon.org/cheat for your browsing convenience.

      So far, all I’ve got there is a set of basic executable commands that are very commonly used from the shell on Linux systems. Some of the commands are specific to the Debian distribution, at the moment, but I may eventually create a viewing script for these cheatsheets that will allow the viewer to filter the content of each cheatsheet to suit his or her needs (such as filtering out Debian-specific commands, and adding in Fedora-specific commands). For now, though, it’s just a text file. Since most Fedora-specific commands open captive interfaces anyway, they wouldn’t really be appropriate for this particular cheatsheet.

      Look forward to stuff about specific configuration files, useful captive interface tools like vim, and maybe even stuff not strictly related to computers. I also might throw some of my deep, but these days largely unused, Windows knowledge in there to help the unenlightened hoi polloi out with their own aggravations.

      That is all. Carry on.

      • #3182882

        cheatsheets

        by jmgarvin ·

        In reply to cheatsheets

        I look forward to seeing the cheat sheets.  I like what you have so far!

        Glad to see you blogging again, and no, I won’t get over it 😉

    • #3184047

      two minor epiphanies

      by apotheon ·

      In reply to bITs and blogs

      1. All bloated GUI apps are unstable. Really. I know some of you are
        out there thinking “No, but not such-and-such an application! This
        application is very stable.” Well, sure, compared to (for instance)
        Outlook. Compared to a daemon that runs in the background, though, or a
        command-line tool, it’s downright vertiginous. I mean, compare Firefox
        and IE for a moment: in that context, Firefox looks rock-solid. On the
        other hand, I clicked on a link about half an hour or so ago and the
        entire program VANISHED, taking a dozen or so web pages in tabs with
        it. I hope none of that was important. This, of course, would account
        for how unstable Windows itself is. It is, after all, a gigantic,
        monolithic GUI application tied together with other gigantic,
        monolithic GUI apps.
      2. Windows Mobile 2003 is friggin’ ridiculous. Here’s this GUI
        desktop-on-a-palmtop miniature OS, with no less than four different
        built-in wireless networking options, and it includes no functionality
        for network browsing. None. I had to find this out the hard way,
        futzing with it, reading documentation, finding more documentation to
        read, and eventually calling the vendor of the device to ask them about
        it. Finally, what I found out is that in order to access any network
        resource you need to connect to it by direct ActiveSync, then (if you
        still feel like it) by ActiveSync over the wireless network, or by
        setting up Windows Terminal Services to allow you to directly connect
        to one particular Windows machine. Even accessing the Web requires
        something like that, as you need to specify a proxy server! This is
        insane. Seriously. What good is this crap? The real joy of all this is
        that the only reason I found out about WM2k3’s shortcomings is that I
        have to somehow get PerlCE installed on this thing and write a network
        client for an inventory tracking system. More luck: I don’t have an
        ActiveSync cradle for the device.
    • #3185012

      the life of a professional enthusiast

      by apotheon ·

      In reply to bITs and blogs

      Yesterday (Tuesday) I got a call from the organizer for the local
      LUG meetings that occur on the second Tuesday of every month. He had
      two things to ask of me.

      The first was whether I’d be willing to run the meeting that night.
      He wouldn’t be able to make it, and wanted me to fill in. I pondered
      for a moment, and assented.

      The second was whether I had enough time to take on a part-time
      network admin job in addition to everything else I’m doing. I’d put in
      about twenty hours a week, fairly flexible hours, during normal
      business hours. Four or five hours a day, four or five days a week. He
      told me I was on his short list of people to recommend at a company
      that needed such help, since he’d no longer be available to fill those
      needs for that company himself. I assured him that, yes, I’d be able to
      do that.

      I went on to the LUG meeting, bringing two people with me who’d
      never attended meetings for this LUG (one of whom was an occasional
      Linux user who mostly stuck with Windows these days for reasons of
      specific application needs as a musician, and the other of whom is
      looking into migrating from Windows). All told, the meeting was a
      success, with me at the helm. I heard only good things about my
      handling, including an impromptu presentation on the installation and
      use of the centericq multiprotocol IM client, and I know for sure that
      at least some of those accolades were entirely honest. Judging by what
      I’ve seen going on with the usual meeting organizer’s life lately, I’m
      guessing he might be sorta grooming me as a candidate to replace him as
      the organizer for the meetings. If that’s the case, there’s an outside
      possibility I may end up running LUG meetings, almost entirely by
      accident, assuming I don’t run screaming the other way.

      The potential job situation is an interesting proposition for me.
      It’d mean I’d essentially be working part-time billable hours with
      three distinct employers. One is the Wikimedia Foundation, one is an IT
      consultancy, and one is a middling-sized corporation. I might even end
      up with a startlingly large payday situation, and three simultaneous
      employers, if I’m not careful. Add to this the fact that my IT
      consultancy boss is moonlighting as the vice president of projects at
      an RFID services consultancy and might have me get on board there too,
      and I might end up in middle management on a fourth employer’s payroll
      as well. Wherever shall I find the time?

      Interestingly enough, if I end up with all these jobs at the same
      time, I’d probably end up making the most at the RFID position, as what
      I’d probably end up doing is lead development on programming projects,
      most of whose development work would be contracted out with me
      coordinating and writing the glue code that ties it all together. Thus,
      an odd sort of middle-management, but no pointy-haired boss by any
      stretch.

      All four of those positions, as well as the possibility of running
      LUG meetings, arise from my enthusiasm for and knowledge of Linux
      systems, at least in significant part. There’s also some
      platform-inedependent programming and Windows expertise involved in
      some of these positions, to varying degrees, but the one unifying
      factor is Linux.

      Don’t let anyone tell you that Microsoft is the key to your IT
      career. I’m Microsoft certified and, of five potential rewarding work
      experiences (two of which are already actualities, and a third of which
      is rewarding for reasons that don’t involve pay), not one has anything
      to do with my Microsoft certs. Frankly, nobody gives a rotten fig about
      my MS certs.

      Once in a while, it’s good being me.

      By the way, I do intend eventually to continue my survey of the
      characteristics of OS design, and I do intend to start the next
      installment with a reference to kernel modularity, but there’ve been a
      lot of things on my plate lately (in case it wasn’t obvious).

    • #3190239

      Understanding OSes: Kernel Modularity

      by apotheon ·

      In reply to bITs and blogs

      This is part two of the Understanding OSes series. Part one was Understanding OSes: Booting. Familiarity with the subject matter of part one is recommended before reading part two. Find collected links for this series at the Table of Contents.

      At the center of every modern operating system is the kernel. The
      kernel is, in essence, the parent program of all other programs that
      make up the totality of your OS and computing environment as a whole.
      The kernel, however, is not all one homogeneous process, at
      least in theory, and the degree to which it is separable into distinct
      parts is called “modularity”. What modularity means changes from one
      kernel design to another: in the Linux kernel, a module might be the
      device driver for your wireless networking card, while in the Hurd
      kernel, a module might be even more fundamental, such as the operating
      system’s interrupt request handling functionality. In the course of
      this article, I’ll address kernel modularity by describing the
      modularity characteristics of several major types of kernel design:
      highly modular microkernel, two-part microkernel, monolithic kernel,
      and megalithic kernel. I’ll do this primarily by examples, though in
      some cases my examples will explain the types by contrast as much as by
      demonstration of the type.

      The most modular OS design philosophy going in theoretical OS design
      circles utilizes what is known as a “microkernel” with sort of a cloud
      of modular parts floating around it, ready to be attached to and
      detached from the microkernel itself to provide functionality if, and
      only when, they are needed. This is meant to provide increased
      security, because no more of the kernel is present than needs to be in
      operation at any time, thus reducing the amount of possibly compromised
      code. This is meant to provide increased stability, because the
      complexity of the current running system is reduced when no more than
      exactly what is needed is actually running. This is meant to provide
      increased performance, as there is less kernel to be loaded into memory
      and to demand the attention of the processor than if everything is
      loaded all the time. All of this is how a microkernel is supposed to
      make your life better, in theory. As we all should know, however, in
      theory there is no difference between theory and practice, and in
      practice there is a difference. Make a note of that principle of theory
      and practice: I use it a lot.

      Unless and until microkernel development makes sudden leaps forward
      in perfection of its principles of implementation, we’re going to tend
      to find many of the intended effects of microkernel design being
      reversed. They perform like dogs because interfaces between the modular
      chunks of the kernel (the microkernel and its cloud of extensions)
      require communication: by analogy, compare being able to pass thoughts
      around in your brain with being able to discuss those thoughts with
      another human being. Their stability is good, as far as the microkernel
      itself is concerned, for exactly the reasons it’s theorized it should
      be good, but the stability for the whole system is reduced by virtue of
      the fact that increased complexity of communication across software
      interfaces creates more opportunity for errors and failures to occur,
      possibly causing big chunks of the system to fail if something goes
      wrong. As far as anyone is aware (or so I’m given to understand),
      security operates pretty much as advertised, though there are those who
      are skeptical of the claim of increased security as well. I suppose, in
      the end, that if the “perfect” microkernel OS were written, it would be
      more stable, secure, and quick on the uptake than everything else
      going, but if the “perfect” monolithic kernel were written as well, it
      too would be perfectly secure, stable, and fast, so one begins to
      wonder where the escalation of hypotheticals might end.
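
      The communication tax is easy to feel even from userland. The sketch
      below, which assumes nothing more than a POSIX system, uses a pipe
      round trip to a child process as a crude stand-in for message passing
      between kernel components, and compares it with calling the same
      function directly. The pipe version loses badly, which is roughly the
      penalty a clumsy microkernel pays on every internal interface.

        /* Crude comparison: direct function calls vs. message passing
         * between processes. A pipe round trip stands in for
         * microkernel-style IPC; only the rough disparity matters. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>
        #include <sys/wait.h>
        #include <time.h>

        static int add_one(int x) { return x + 1; }

        static double now(void)
        {
            struct timespec ts;
            clock_gettime(CLOCK_MONOTONIC, &ts);
            return ts.tv_sec + ts.tv_nsec / 1e9;
        }

        int main(void)
        {
            int to_child[2], to_parent[2], i, v;
            double start, direct, ipc;

            if (pipe(to_child) != 0 || pipe(to_parent) != 0) {
                perror("pipe");
                return 1;
            }
            if (fork() == 0) {                  /* the "server" process */
                int x;
                while (read(to_child[0], &x, sizeof x) == sizeof x) {
                    x = add_one(x);
                    write(to_parent[1], &x, sizeof x);
                }
                _exit(0);
            }

            start = now();
            for (v = 0, i = 0; i < 10000; i++)
                v = add_one(v);                 /* direct, same address space */
            direct = now() - start;

            start = now();
            for (v = 0, i = 0; i < 10000; i++) {
                write(to_child[1], &v, sizeof v);
                read(to_parent[0], &v, sizeof v);   /* round trip to "server" */
            }
            ipc = now() - start;

            close(to_child[1]);                 /* let the server exit */
            wait(NULL);
            printf("direct: %fs, pipe round trips: %fs\n", direct, ipc);
            return 0;
        }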

      There is exactly one commonly available true microkernel OS in
      current development, so far as I’m aware. That OS is GNU/Hurd. It’s
      still not, as of this writing, in ready-for-release form, but it’s
      operational. From what I understand, it’s slow as molasses in
      comparison with GNU/Linux system performance, but operational. I’ll be
      keeping an eye on its development to see where it’s going.

      There is also a commonly available pseudo-microkernel in current
      development that might surprise you. That kernel is called Darwin, and
      it’s the heart of MacOS X. It is a very nearly “true” microkernel, but
      that of course is shot down the tubes as far as microkernel theory is
      concerned by the way Apple makes use of the Darwin kernel for the basis
      of MacOS X. You see, MacOS X is built as a layered-on monolithic
      kernel, with Darwin simply acting as the core of it. Ultimately, MacOS
      X seems to be designed as a bilithic kernel, if I’ll be allowed the
      neologism, which operates pretty much indistinguishably from the way a
      monolithic kernel operates.

      The next step in our odyssey from microkernel to megalithic kernel
      is the modular monolithic kernel. This is a somewhat effective attempt
      to wed some of the better facets of microkernel and monolithic kernel
      design into one single kernel design standard. The current canonical
      example of this is the Linux kernel. The way the Linux kernel works is
      largely indistinguishable from the way a normal monolithic kernel
      works, as viewed from outside, and thus it enjoys the performance
      benefits of a mostly unified operating system. Essentially, a true
      monolithic kernel makes everything that is strictly part of the
      necessary OS of a computer system part of the kernel, which does not
      include the user interface. This is normally only possible by knowing
      before you compile your kernel exactly what hardware you’ll be using,
      and designing the kernel to suit that hardware, then compiling it.
      There are ways around this, and I don’t pretend to really know anything
      about them excepting the way Linux does things, so I won’t comment
      except on the Linux modular monolithic kernel design.

      Within the Linux
      kernel, there is a set of module support interfaces that allow for
      kernel modules to be loaded and unloaded as needed. Linux notably uses
      kernel modules for device drivers, among other purposes. Because of the
      tight integration of modules with the kernel, there is minimal
      performance loss in the communication between modules and kernel,
      though when you compile your kernel you need to “deactivate” support
      for modules you know you won’t use if you want the greater performance
      benefits made possible by their absence. For those who are
      familiar with Linux distributions, this is (at least theoretically) a
      performance benefit of using Linux From Scratch, Debian From Scratch,
      Gentoo, or really any custom-compiled kernel, assuming you are willing
      (and knowledgeable enough) to configure the kernel to suit. All else
      being equal, then, the combination of features of modularity and
      monolithic kernel design contributes greatly to the performance of
      Linux, and helps to explain how a Pentium II 366MHz Thinkpad running
      Linux that I own consistently outperforms an Athlon XP 1600+ desktop
      system running Windows XP Pro that I also own.
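
      Since the Linux case is the concrete one here, it’s worth showing what
      a kernel module actually looks like at its barest. The skeleton below
      is the stock “hello world” module that nearly every kernel-module
      tutorial starts with; it drives no hardware, it’s just the minimal
      shape of something that insmod can load and rmmod can unload.

        /* Minimal loadable Linux kernel module: no device handling, just
         * the entry and exit hooks the module loader calls. */
        #include <linux/init.h>
        #include <linux/module.h>
        #include <linux/kernel.h>

        MODULE_LICENSE("GPL");
        MODULE_DESCRIPTION("Minimal example module");

        static int __init hello_init(void)
        {
            printk(KERN_INFO "hello: module loaded\n");
            return 0;                  /* nonzero would abort loading */
        }

        static void __exit hello_exit(void)
        {
            printk(KERN_INFO "hello: module unloaded\n");
        }

        module_init(hello_init);
        module_exit(hello_exit);

      Built against the headers for the running kernel, that loads and
      unloads on demand, which is exactly the load-as-needed behavior
      described above.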

      This brings us to the red-headed stepchild of kernel modularity,
      Windows. There have, over the years, been aspects of Windows kernel
      design that make arguments possible claiming that it is a microkernel
      system, a monolithic kernel system, a bilithic kernel system, or a
      megalithic kernel system (the last being a system that includes even
      such normally application-level software as the user interface in the
      kernel itself). Ultimately, I suppose the way to describe the somewhat
      schizophrenic history of the Windows kernel is as a modular megalithic
      kernel. From top to bottom, the OS itself is made up of separate, but
      indivisible, parts. For instance, the user interface of Windows XP is
      indeed a separate set of code from the rest of the kernel (and is,
      itself, several separate programs interoperating intimately), including
      the legendary OS-integrated Explorer rendering engine used to make much
      of the user interface possible, but the OS itself literally will not
      start and operate properly without all major modular parts of the
      entire vertically integrated stack of parts running. If the UI goes
      down, the entire operating system is essentially inoperable unless and
      until the UI starts working again.

      Technically speaking, of course, the
      higher-level OS components are not part of the kernel, but there is
      such an indistinct dividing line between parts of the operating system
      that sometimes it can be a little hard to tell where the kernel ends
      and everything else begins. Despite initiatives at Microsoft over the
      years intended to implement good design principles in kernel design,
      they have largely been sabotaged by the necessities of interfacing the
      kernel with the rest of the operating system. The part of the OS that
      is officially identified as the kernel by Microsoft, its central parent
      process, is actually not a bad design, at all, in theory. In practice,
      it is the axle on which the Windows wheel turns, and a stable axle
      won’t smooth the ride without some rubber between the wheel and the
      pavement. The reasons for the evolution of the Windows system
      architecture are manifold, and will be the subject of a later addition
      to the Understanding OSes series.

      • #3188545

        Understanding OSes: Kernel Modularity

        by jmgarvin ·

        In reply to Understanding OSes: Kernel Modularity

        I’m looking forward to more exokernels on the market.  While I
        think they are a little on the kludgy side, I also think they bring
        together the best of modularity and the best of the efficient
        structure of current monolithic kernels.

        What are your thoughts, apotheon?

    • #3189971

      Windows->Linux: Introduction

      by apotheon ·

      In reply to bITs and blogs

      I keep seeing people asking for advice relating to getting started
      in Linux, and the advice they get is often quite disorganized, in large
      part because of where they’re asking. For instance: jumping into a
      distro-nonspecific discussion forum such as TR, and asking for what’s
      the “best” distribution of Linux to use, is likely to elicit sixteen
      answers from a dozen people, followed by a whole lot of debate,
      sometimes highly technical and other times full of crap. Similar
      problems arise with other pieces of advice for which a putative
      penguinista might ask, and really it helps to find a clear, coherent
      discussion of the options, along with advice on how to accomplish what
      you’re aiming to do that is similarly clear and coherent.

      That’s what I aim to do with this. Before I get anywhere near the
      end of my Understanding OSes series (and I may never finish ? I’ll
      probably come up with ever-more stuff about which to write as I keep
      learning more), I’m declaring the intent to begin a new series. Welcome
      to the Windows->Linux series.

      I’ll probably start with some cautionary statements about if/when
      you should make the leap from Windows to Linux, what to expect, and how
      to get help when you need it. That’s even more fundamentally important
      than your choice of distro, since it’ll help you out with everything
      you do in your transition from one OS to another.

      Table of Contents:

      1. Windows->Linux: To Migrate or Not To Migrate
      • #3189183

        Windows->Linux: Introduction

        by charliespencer ·

        In reply to Windows->Linux: Introduction

        I look forward to reading your posts on this topic.  I’ve found
        your previous posts and comments to be well reasoned and equally well
        expressed.

      • #3195051

        Windows->Linux: Introduction

        by p_jones79 ·

        In reply to Windows->Linux: Introduction

        I too have been trying to find answers to many Linux vs Windows
        questions for some time, and have found forums and web sites similar
        to what you have described.
        I look forward to reading more on the subject. As a student of MCSA/E
        who just completed my CompTIA A+, I am trying to get a good grip on
        what skills are needed/wanted to succeed in this industry.
        I’m sure if I had more time I could come to my own conclusions, but
        that’s the point of sites like these, I suppose, and I love you all
        for it 🙂

    • #3176259

      The Future of IT

      by apotheon ·

      In reply to bITs and blogs

      [The text of this post was copied from an article of the same name at @political.]

      In the here and now, the domestic US information technology industry is in trouble. Sales aren’t as brisk as they once were, and the employment market is in the toilet. There are moments of optimism here and there, but when you turn back around you always find a dozen IT industry professionals out of work and wondering where the industry went. Meanwhile, Microsoft is pressing for more H-1B work visas so they can hire more foreign nationals domestically, claiming there just aren’t enough qualified IT professionals available.

      The commoditization of software is what inevitably led to this state of affairs. Microsoft essentially spearheaded the commoditization of software in the US, turning OSes into prepackaged “products” sold by unit. Microsoft is often credited with “revolutionizing” the industry in this manner, making it into a powerhouse by virtue of the commoditization of software, but the truth of the matter is that technology advances as it’s needed, and it would have happened whether the software was sold as product units or not. In any case, this commodity profit model was only possible by strictly enforcing copyright law in a manner advantageous for maintaining Microsoft’s revenue streams, and continued to be profitable only by lobbying for ever-stricter copyright law in Microsoft’s favor. It’s not only Microsoft that has done this, of course: every large proprietary software vendor has had a hand in it, but it’s Microsoft that led the charge, and continues to lead it.

      By making it a product rather than treating development as a service to the customer base (which would of course make Microsoft in current form entirely nonviable), these corporations have created a situation wherein software development costs can be kept to a minimum to provide greater profit margins in a “finished product” format. When it’s a sealed-up “product”, a program can be 98% old code and sold as something entirely new, it can be assembled from “parts” that are developed by anyone anywhere that simply comply with certain external behavioral standards, and can be made functional and featureful without the actual code being of any particular level of quality. As a result, close control over code quality isn’t needed, and offshoring becomes entirely viable as a development model.

      The copyrighted code is sold in executable form on a CD, and you are required by law to not make any copies of it except as possible “backups” in case the original is damaged. You are not allowed by law to install it on more than one computer at a time, and depending on circumstances you may not be legally allowed to install it on more than one computer ever. This maximizes profits, as the applications and OSes sold cost effectively nothing to reproduce in bulk quantities for sale, and with ever-cheaper offshoring development available, and the cost of transmitting finished code electronically being effectively zero, overhead for corporations like Microsoft gets smaller and smaller. As surmised in the linked article, the reason H-1B visas are desirable to Microsoft has nothing to do with not being able to find engineers. One of the major reasons for it is that Microsoft doesn’t need to pay visa workers as much as domestic workers, so they can hire people here on a work visa to do the few things that simply cannot be done overseas and transmitted electronically, such as local project management.

      All of this comes together to create a situation wherein computer scientists are in decreasing demand in the domestic marketplace, all while software vendors are telling the media that offshoring is reversing and domestic hiring will increase so that they can keep their customer base active. There’s the first problem that rears its ugly head for software vendors: as the technical crowd is put out of work, they stop spending money on software. The very people they’re putting out of work are a significant percentage of their customer base. In essence, the domestic software market is cannibalizing itself because of the consequences of a commoditized software profit model.

      Linux has the ability to be the spearhead for the cure to these ills, to mix a metaphor. Open source software development, with Linux as its current poster child, short-circuits profit models that rely on software as a commodity. It emphasizes the value of development, and de-emphasizes the value of corporate bureaucracy, packaging, and marketing dollars. In short, it puts potential revenue streams back in the hands of individuals, and takes them out of the metaphorical hands of corporations. It emphasizes the value of the developer and the software support provider. It favors many small companies rather than a few multinational corporations, and demands local access to development and support talent.

      If the law doesn’t shift to exclude the growing influence of free/libre/open source software, FLOSS may just become the revitalizing influence needed by the domestic IT industry. Of course, corporations have strong lobbies, and FLOSS has almost no lobby at all, so there’s no guarantee the law won’t change tomorrow to make a domestic industry revitalization of that sort effectively impossible. We’ll see.

      As for me, I composed this article on a Linux machine.

      • #3176220

        The Future of IT

        by jmgarvin ·

        In reply to The Future of IT

        Part of the problem with the IT industry is the doom and gloom. 
        HR doesn’t understand IT (and never will) and corporate culture wants
        IT to be something it is not and never can be.

        What IT needs is an image facelift.  Forget the certs, forget the
        education, forget the experience, just focus on WHY the typical IT
        worker looks bad to corporate America.
        1) We are practical.  We don’t drive a Porsche, even though we might make $100k/year
        2) We aren’t socially apt.  OK, some of us are, but many don’t
        have the soft skills to function like corporations want us to
        3) We don’t explain IT.  It is all a black box…
        4) Technology should just work.  There shouldn’t be any of this
        “there will be downtime from 8am – 5pm for system upgrades” crap that
        is floating around… If a system outage is needed, explain to your
        users WHY you need to do it and WHY it will take so long.

        Much of IT is considered a service, but that isn’t quite right…it is something of kung fu mixed with technical knowledge.

      • #3176109

        The Future of IT

        by Jay Garmon ·

        In reply to The Future of IT

        A lot of this is fiscal. Development requires paying talent, and talent
        costs money. Finding ways to distribute and amortize those costs over
        the widest possible consumer base is what drives the whole wagon. That’s
        why software development companies rather than in-house software
        development are the norm, and that’s why DRM is such a touchy subject,
        because any crack in tight copy control dilutes the amortization curve.

        It’s also why outsourcing is so huge, because it drops the talent price, plain and simple.

        Also, to jmgarvin’s point, the typical IT consumer (read: non-techie)
        wants to use information technology the same way he drives a car–with
        minimal understanding of the basic drive mechanism, minimal maintenance
        responsibilities, and a high degree of certainty that the product will
        operate reliably 99.999% of the time.

        To me, the answer here is to lean into all this, rather than fight it.
        Loath though I am to ever say IBM is insightful, their notion of
        relying on IT services rather than products is a wise one. Enterprise
        consumers want a specified service level at a specified cost, and they
        don’t truly care what hardware or software is needed to run it, any
        more than I care what servers Google uses to build their search engine.
        I want good search results, and whatever Google does to deliver that
        quality of service is OK with me.

        Embracing utility IT is inevitable, especially since it will place the
        quality of service at the forefront of the IT equation, and it will
        break the software megaliths’ stranglehold on feature bundling. I for
        one can’t wait.

    • #3186515

      following up

      by apotheon ·

      In reply to bITs and blogs

      As I’ve said in an early post in this blog, I’m not planning to
      respond to any comments posted under my entries directly. That doesn’t
      mean nobody should post their comments, however: the only reason I’m
      not posting replies as comments of my own is that I figure I should let
      my words stand for themselves, and if something really warrants a
      statement from me it’s probably worthy of its own blog entry.

      This is one of those entries.

      exokernels:

      First of all, TR member jmgarvin
      expressed some curiosity about my thoughts on exokernel design. For
      those of you who are not familiar with the concept of the exokernel,
      it’s essentially an OS design architecture where all the traditional
      operating system functionality outside of “hardware multiplexing”
      (read: hardware abstraction and management) is delegated to the
      application level. In practice, where exokernel systems
      are being implemented, this means that program library frameworks that
      perform the interface functionality normally associated with the OS are
      used as a basis for higher-level applications, external to the exokernel itself. For instance, in the
      MIT model of an exokernel OS,
      there’s a set of unixy environment libraries called ExOS, on top of
      which Emacs runs. This allows both increased system modularity for
      greater stability and greater customizability of the user interface
      environment for applications. In theory, it sounds like a good idea.

      Unfortunately, this is similar in many ways to the theoretical basis
      for the microkernel, taken to an absurd degree. The potential arises
      for a great many problems with resource consumption, conflicting
      resource management at the application level, reduced performance due
      to increased API activity, and so on. This sort of OS architecture is
      probably better suited to the lab than real-world application. It may
      some day mature enough to be useful, but if it were completed and
      released tomorrow it would probably be looked at as largely useless
      unless you wanted to run a Windows-like system with greater kernel
      stability and security. One might conceivably do something similar to
      colinux (side-by-side OS installations, simultaneously running), but
      the useful niche for such a thing is vanishingly small at present. Then
      again, I’m no expert when it comes to exokernel concepts.

      IT image:

      In an earlier comment, jmgarvin (again) observed that the IT
      industry has an image problem, or, more to the point, several image
      problems, as follows:

      1. IT professionals take a utilitarian approach to things that
        corporate culture views as being status-related. We don’t, for
        instance, tend to run right out and get a Porsche to declare status
        when we start making $100k per year, and we tend to be prone to
        affectations like ponytails and pocket protectors.
      2. IT professionals typically don’t interact with colleagues the same
        way business grads do. We don’t operate with the slick veneer of
        salesmen, for instance, and schmooze with the VPs at office parties. IT
        pros are often most comfortable when left the heck alone and allowed to
        work long, hard hours in peace.
      3. Information Technology is a mystery to “outsiders”, including the
        people that pay the IT bills. They don’t want to know the details, only
        that it works, and at the same time they feel left out of the loop when
        IT professionals don’t tell them the details (which is generally as
        close to “always” as possible). Tell me about the last time you had to
        explain why a routing table was lost and how much you enjoyed trying to
        translate it into terms your nitwit pointy-haired boss could understand.
      4. When IT resources go down, the lack of willingness to understand it
        leads to frustration. IT workers get blamed immediately, even when it’s
        someone else’s policies that led to the downtime. Necessary Windows
        system restarts become the IT manager’s fault at the next performance
        review. Computers, to these non-IT workers, should “just work”, and if
        they don’t, those workers need someone to blame.

      The above is a paraphrase, with a bit of my own interpretation and
      explanation stirred in. Don’t blame jmgarvin for any disagreement you
      may have with my rewrite.

      utility IT:

      The explanation The Trivia Geek
      offered of how “utility IT” is the inevitable future of IT makes some
      very good points. Ultimately, businesses are going to want their IT
      infrastructure to “just work”, and won’t care so much about what is
      being implemented to achieve that. As he noted, IBM is definitely
      taking this direction, throwing out product names largely as marketing
      strategy. IBM’s movers and shakers are fully cognizant of the fact
      that, ultimately, business owners don’t generally care about whether
      you’re using Windows, Linux, or OS/400, as long as the business is as
      profitable as it can be. The IT industry service-based profit model
      looms large on our horizon.

      That sounds good to me.

      The Trivia Geek mentioned another matter, tangential to that. He
      mentioned outsourcing as a means to save money. Open source software is
      the answer to much of that: when your software development is free, you
      don’t need to pay someone to do it for you, even at severely reduced
      rates. Such software will need implementation expertise,
      modification/customization, and so on, however. Thus, the development-
      and support-as-a-profit-model evolves.

    • #3051541

      Windows->Linux: To Migrate or Not To Migrate

      by apotheon ·

      In reply to bITs and blogs

      This is Part One of my Windows->Linux series, wherein I’ll be
      discussing the issues surrounding migration from Windows to Linux. The
      series started with Part Zero, the Introduction.

      Any time you’re looking at your network and thinking “Whoa, this
      kinda sucks,” you’re going to start thinking about changes. Such
      changes might involve altering trouble ticketing procedures in a
      business environment, hardware upgrades, new projects for a home-based
      network, tightened system security profiles, or even migrating to a new
      operating system for many, if not all, of your herd of boxen. Having
      been following the news lately (You have been following IT news,
      right?), you might have noted that a lot of people are talking about
      Linux as a viable replacement for Windows in anything and everything up
      to the server infrastructure in a medium-sized enterprise (since nobody
      in his right mind runs Windows as the primary server infrastructure in
      the biggest enterprise networks).

      Maybe Linux is for you. After all, if GM, Autozone, the city of
      Munich, Industrial Light and Magic, and countless others can have
      staggering successes and make large-scale migrations, it stands to
      reason that you sure as heck can. Even better, Linux is free!

      Wait a minute. Hold on there, Tex. What you do on your home network
      is on you, but if you’re the IT manager and you’re looking at a Windows
      network and thinking “This all has to go,” you may be right, or you
      may be stepping off the edge into a deep, dark hole, out of which your
      career may never climb. There’s a lot to consider in planning a
      platform migration in a network, even if it’s a small one. In fact,
      even stand-alone systems can provide significant challenges in moving
      from one OS to another.

      First of all, you need to examine your network or system needs and
      determine whether any migration at all is a good idea. A number of
      important factors come to light. For instance, if you are doing a lot
      of computer gaming and aren’t interested in spending time figuring out
      how to make World of Warcraft work beyond putting the CD in the drive
      and clicking on “OK” a lot, you should stick with Windows for that.
      While pretty much any application or server available on Windows has a
      counterpart (or two, or fifty) that runs like gangbusters on Linux,
      those counterparts are often quite different from the Windows
      originals. While this isn’t in and of itself a reason to discard Linux
      as an OS migration target, you need to be aware of the features and
      functionality provided by the applications you’re currently using, and
      how they stack up against those of similar applications on the platform
      to which you’re migrating. Even if the Linux application is richer in
      features and functionality overall, there’s no guarantee it will offer
      all the same features and functionality as its Windows counterpart, and
      if some missing feature is business-critical (such as compatibility
      with a particular customer’s or upstream distributor’s software), you
      might want to rethink abandoning the Windows platform.

      For about 98% of networks, however, this isn’t really a concern:
      most people don’t use anything in Windows that can’t be done with the
      lineup of freely available software in Linux. Even games are becoming
      increasingly portable between OSes, thanks to the efforts of the
      developers for projects like WINE and Cedega.

      Another important point to consider, of course, is support costs.
      The Microsoft marketing department is always happy to point out that
      Linux has higher support costs than Windows. Sadly, they’re full of it,
      but that doesn’t mean you shouldn’t look into support costs for the
      Linux systems as a possible sticking point. Increased cost is the
      exception, not the rule, but that doesn’t mean it’s not possible.
      Keep in mind that Linux systems do still need to be managed by a
      competent administrator, just as Windows systems (and Solaris systems
      and OS/400 systems and OS/2 systems and BeOS systems and MacOS X
      systems and so on) do. Typically, you’ll need fewer admins for a Linux
      network than for a Windows network, in part because of reduced
      administrative overhead and in part because fewer systems are often
      required for the same network size and operations, but the number of
      needed administrators never quite hits zero. There’s also the simple
      fact that if you’re a Windows guy who knows nothing about Linux beyond
      what you’ve read in a trade rag, you’ll have to learn a lot more before
      it’s a good idea to migrate everything to Linux, particularly if you’re
      the only administrator, or you’ll have to hire someone else to manage
      your network, which might mean one more employee than you had before.

      It’s not nearly as difficult to find affordable Linux sysadmins as
      some (such as Microsoft) would have you believe. Take heart. If you
      need someone who can competently manage a Linux network, though, you
      need to adjust your hiring expectations slightly. If you want someone
      with Windows credibility on his resume, it’s a simple task to find one
      in the CS department of the local university: just walk into some of
      the senior-level undergraduate CompSci classes and swing a dead cat,
      and you’ll hit about thirty of them. The catch is that most of these
      guys aren’t actually competent sysadmins, even with Windows. These are
      people whose only credible sysadmin knowledge usually comes from
      formal education. There are exceptions, but Microsoft certifications
      aren’t a reliable indicator either (take it from me: I have two MS
      certs, and they’re not the reason I’m a good Windows admin). If you
      want someone good at the job of system administration, you need
      someone who’s enthusiastic, and there’s nowhere better on the planet,
      as far as I’m aware, to find enthusiastic sysadmins than at a good
      Linux User Group meeting. You might even be surprised by the people
      you find at a LUG meeting who are not only competent Linux sysadmins,
      but competent Windows sysadmins as well, whether they have degrees or
      not (and many do). So, be aware that you may have to hire someone new,
      and/or you may have to learn new skills, but you don’t necessarily
      have to have a hard time finding this hypothetical new employee for a
      reasonable salary or wage.

      It used to be true that between Linux and Windows there was a huge
      gap in quality of hardware support with Windows as the clear winner.
      That’s no longer the case. In some ways, Windows hardware support is
      better, and in some ways Linux hardware support is better. We’re
      unlikely to see either become a clear winner in that arena again,
      unless Windows goes the way of the dodo due to market realities or open
      source software development ends up getting outlawed at the behest of
      lobbyists for the proprietary software vendors. You still need to
      consider hardware compatibility issues when you deliberate over whether
      to migrate from Windows to Linux, though.

      Test Linux on the hardware you intend to use before committing to
      using it. You’ll have to ensure you iron out any wrinkles in your
      understanding of how Linux and Windows hardware compatibility differ
      from one another, and you’ll want to be sure that Linux supports
      everything fully. Windows support for some hardware is effectively
      nonexistent, as anyone doing the sort of consulting work I’ve been
      doing for the last couple years should know quite well, and Linux can
      have such hardware support issues as well, though Linux can still get
      something of an unfairly won bad reputation when people try installing
      it on a system they bought with Windows pre-loaded. Clearly, if you’re
      buying a system with an OS already installed on it, the hardware vendor
      is unlikely to have sold you a system that your OS won’t support, but
      if you then go installing a completely different OS on it you might
      have severe problems. For example, Linux support for Mac hardware is
      not as good as that of MacOS X, but it’s a darn sight better than
      Windows support for that hardware (which won’t even install). In short,
      the moral of the story is that you have to ensure you’ve got the right
      hardware for your OS before you rely on that OS working properly on
      that hardware. Double-check that and, if there are problems, you need
      to decide whether you can “fix” them, whether you can afford to replace
      the offending hardware, and whether you might rather just stay with
      Windows.
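
      Before committing, it helps to see what the kernel actually detects on the machine in question. Here is a minimal sketch of the sort of quick survey I might run from a Linux live CD session; exact availability varies a little by distribution (lsusb, for instance, comes from the usbutils package):

        lspci                # list PCI devices: video, network, and sound chipsets
        lsusb                # list USB devices
        cat /proc/cpuinfo    # processor details
        lsmod                # kernel modules (drivers) currently loaded
        dmesg | less         # kernel boot messages; look for hardware it complained about

      If something important doesn’t show up there, find out whether a working driver exists before you bet the migration on it.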

      Consider who you’re employing (or who is getting employed alongside
      you), and for what purposes you’re thinking of using Linux. Linux on
      the desktop has come a long way, and is usually every bit as good as
      Windows as a desktop system. Sometimes it falls well short of Windows,
      and sometimes it makes Windows look like a child’s broken toy. It all
      depends on who’s using the computer, and for what purposes they’re
      using it. If you’re putting computers in front of accountants, you
      should really consider sticking with Windows (at least for the
      bean-counters), particularly if you have a high turnover rate for
      accountants. There is good accounting software for Linux, but it’s one
      area in which Linux-compatible software can lag, and it’s definitely an
      area where the people with the skills you need are almost invariably
      going to be permanently wedded to their Windows-based applications (at
      least for the time being). On the other hand, if you’re dealing with
      engineers in bleeding-edge industries, you’re probably going to see a
      lot of people who refuse to use Windows, which just won’t compare
      favorably with the various workstation-level unices available.

      Finally, and I can’t stress this enough: if you are not, yourself,
      conversant enough in Linux to find this entire explanation completely
      unnecessary (because you already know all of it and already have the
      necessary calculations and resources in place), you need someone to
      help you analyze the situation and make decisions. Make sure it’s
      someone who knows both Linux and Windows, and someone who doesn’t have
      any short-term vested financial interest in either platform. Conflicts
      of interest are suboptimal, at best.

      • #3050441

        Windows->Linux: To Migrate or Not To Migrate

        by schuylkill ·

        In reply to Windows->Linux: To Migrate or Not To Migrate

        I just wanted to say that I appreciate your effort on this topic.  I am a Windows sysadmin with an interest in learning Linux.  Your first columns on the subject of a Windows to Linux conversion show that you are not a zealot for either side, which is an attitude I can appreciate (being an ex-NetWare admin myself).

        Please continue with this wonderful, useful series.  Thank you!

      • #3050432

        Windows->Linux: To Migrate or Not To Migrate

        by pweegar1 ·

        In reply to Windows->Linux: To Migrate or Not To Migrate

        I’m also a Windows sysadmin for a small but extremely important division where I work. We are the Corp Records department. As you can imagine, we store the company’s records, in both physical form and on an optical storage device.

        As part of our record keeping we have developed a FoxPro tool that tracks boxes in our various warehouses. At one time we tried storing the VFP tables on a Linux box using Samba. Unfortunately, no one in the company knew Samba well enough to make it work, which meant no record locking and data corruption.  Therefore Linux is not an alternative to be taken lightly.  Then there is the hardware. We run 2 jukeboxes that store our scanned files. They run under Windows.  Again, Linux is no alternative.

        And the scanning software itself runs under Windows. IF there is a Linux version available, it would be expensive, both in terms of licensing and in conversion and user training.  And our scanning department is nowhere near interested in learning a new OS.  Most have a difficult enough time just turning the power on and logging in.

        My point here is that while Linux is an alternative to Windows, there are many negatives that need to be dealt with. While the OS may be free, you still need support staff that IS competent (programmers, network people, help desk, etc.).  Are you going to terminate current staff just to bring in a new OS? Training costs are expensive, regardless of the OS. Conversions can be expensive and nightmarish (I’m going through a server migration as we speak, and it’s been nothing BUT a nightmare. The vendor never thought customers would upgrade both OS and server hardware at the same time, I guess. They have only just recently brought out a migration utility.)

        And finally, while Windows may be bug-ridden and have security holes large enough to drive a fleet of 18-wheelers through, Linux isn’t bug free either and has had its own security issues in the past.

      • #3047505

        Windows->Linux: To Migrate or Not To Migrate

        by archie_cunanan ·

        In reply to Windows->Linux: To Migrate or Not To Migrate

        First of all, please continue this series.  I’m an IT Manager for a retail company.  The TCO for Microsoft licensing has increased dramatically in recent years; this year alone it ate up almost half of our budget.  This prompted our management team to consider an alternative OS as our long-term strategy.  Last year we started using Linux on our back end, for our internet gateway and firewall, and that resulted in huge savings.  The savings came from comparing the cost against a Windows-based gateway and firewall, or even network appliances.  These encouraging results made us start actively considering migration of our network to the Linux platform.

        In addition, our principal office requires that we run our Web presence on a Linux, Apache, MySQL, and PHP platform.  In our case we really have to re-think our IT strategy and start creating plans for the migration, even though all of our IT guys are educated, trained, and experienced on the Microsoft platform.

        Therefore, we in the IT department, the management, and our users must swallow the bitter pill for our company’s survival.

      • #3047852

        Windows->Linux: To Migrate or Not To Migrate

        by deadly ernest ·

        In reply to Windows->Linux: To Migrate or Not To Migrate

        Had some trouble with posting a comment earlier, so please excuse me if this ends up being posted twice.

        I have always believed in ‘horses for courses’ and agree that some situations are better off staying with Windows. Your non-partisan approach will make it easier for many to understand the issues, as you have put into words many things I have tried to present to others in the past, both good and bad.

        In a previous job I worked as a sys admin on a high-security network and gateway; for security reasons we deliberately included some Windows boxes in the gateway. By having a layer of Windows boxes in front of Unix/Linux boxes that were monitored by IDS software on a Linux box, we made it hard for people to punch their way in. Techniques used to breach the Windows boxes would not work on the Unix boxes, and vice versa. Also, the Windows boxes were easier to set up to crash under certain conditions that were indicative of an intrusion, so the Windows systems acted as a trip wire against intruders. The network servers were mostly Linux and the desktops were mostly Windows, to suit user preferences.

        At home I am currently using a Windows system and am moving to Mandrake so that I can do away with having to dual boot two MS operating systems, as some apps and games won’t work on XP but will work on Win 98 and vice versa; my research says that they will all work in WINE. I have helped a number of small businesses convert to Linux and have even advised some to stay with MS or use mixed systems because of user and application needs.

        The critical thing in this is, as you say, to conduct a business needs analysis and then an analysis of the software and hardware before even starting to do anything serious about a switch.
    • #3051537

      more about comments

      by apotheon ·

      In reply to bITs and blogs

      It has occurred to me that there’s more needed in response to
      comments to ITLOG than merely deciding whether or not to post something
      that addresses them. As such, I’ll do more:

      If you have something you specifically want to see in a later
      installment in a series, post your request to a comment to either a
      part of the series that specifically leads into that matter, or post it
      to the introductory or first post of the series. I’ll make sure I
      interlink the posts in my article series here so that you can always
      find your way back to the first, and I’ll ensure there’s a table of
      contents for them in the introductory post as well. This should make it
      easy to figure out where to post a request. I’ll also make an effort to
      check for new comments on all series posts, since there doesn’t yet
      seem to be a way to just turn on automatic notification for such
      things. Hopefully that changes as the volume of posts to ITLOG climbs.

      If you want to be informed when there’s a new addition to a series,
      make sure you post something to that effect in a comment to the first
      installment of the series. If you have a TechRepublic profile with the
      functionality allowing someone to contact you there, or if you provide
      contact information (I’ll accept email or ICQ at this time), I’ll make
      the rounds and let you know when something new for that series comes up.

      If you have any suggestions for other things I should do in response
      to comments, feel free to comment to this post with such
      recommendations or to send them directly to me by way of my profile. I
      don’t guarantee I’ll take your recommendation, but I’ll at least read
      it and consider it.

      By the way, while I’ve considered simply tagging everything I post
      with the word “apotheon”, I have decided against it. I only do so when
      the subject matter actually pertains to me, or to some project of mine,
      in some reasonable manner. Thus far in ITLOG, that has been often. The
      upshot is that while using the apotheon tag to find my posts can be
      useful, it won’t give you everything I write. This public service
      announcement was brought to you by the numbers 9 and 81, and by the
      letter J.

      Enough rambling. Stop reading this. Go back to something entertaining.

      EDIT: I’m aware that contact-listing me makes my entries show up in a convenient place, et cetera, but knowing that I never use such aggregation pages here, I figure others might not either. Thus, the email offer.

      • #3050951

        more about comments

        by beth blakely ·

        In reply to more about comments

        If folks add you to their Contacts list, they’ll see your new blog
        entries in their Posts From My Contacts section on the main Blog page.

        Members could also subscribe to your Blog’s RSS feed to receive updates easily.

        Also, if you tag each piece of a series with the series name,
        “Windows->Linux,” for example, the resulting search page will group
        them together providing a convenient URL to send a series to a member,
        etc.

      • #3066426

        more about comments

        by rtmjarhear ·

        In reply to more about comments

        I just asked why so many passwords. Thanks. By the way, you forgot to tell me your birthday… Jezzzzzzzzzzzus, mother Mary, and St. Joe. Take two Hail Marys and call me in the morning.  *73-(jarhead)-77*
         

    • #3051528

      Understanding OSes: Table of Contents

      by apotheon ·

      In reply to bITs and blogs

      Because I started the Understanding OSes series just sorta off-the-cuff, I didn’t plan ahead very well. As such, the table of contents for that series comes now, after the first two installments have already been posted. Without further ado:

      1. Understanding OSes: Booting
      2. Understanding OSes: Kernel Modularity
    • #3051494

      And now, here’s Chad with the GNUs.

      by apotheon ·

      In reply to bITs and blogs

      Someone in Texas is upset about the bureaucratic inertia of school
      districts, and the tax money it’s costing him. Maybe it’s no mere
      coincidence that the spelling of Texas is so close to that of taxes. In
      any case, have a read of Pulling the Trigger on Bureaucratic Nonsense in the Blog of Helios, at the Lobby4Linux website. I wish him luck, from afar.

      Since UnitedLinux, an attempt to get a standardized, unified Linux
      base for major enterprise-level vendors, crashed and burned due to all
      the major players jumping ship or vanishing, I’ve fairly regularly come
      across Windows devotees (let’s call ’em “zealots”, just like the
      Microsofties tend to call anyone who uses Linux a “zealot”) who
      complain that Linux has too many choices to be a good choice for a
      computing platform. Of course, I end up staring at the person, the
      computer screen, whatever, with utter disbelief and shock every time
      that so-called argument comes up: I’ve never understood the mindset
      that demands the removal of options. Since then, the Linux Core
      Consortium came into being, with the mission of producing a single
      “standard” Linux base for major enterprise-level vendors. Sound
      familiar? Yeah, me too. In any case, Ian Murdock, the founder of Debian
      GNU/Linux and now the chairman of Progeny Linux (a Linux business
      support services company), pulled Progeny out of the LCC and, along
      with several other purveyors and support companies for Debian-based
      Linux distributions, formed the DCC, or Debian Core Consortium. Now, the LCC and DCC seem to be directly competing for the title of head cheese of enterprise Linux standardization.

      I’m a Debian fan, myself. It’s an incredibly well-maintained and
      well-designed distribution, and it offers an extremely lean minimal
      install (which is a very, very good thing) as well as options for more
      feature-rich “default” installs, and quite probably the best software
      management toolset on the planet. It’s probably the most technically
      appropriate distribution for enterprise server implementation, and all
      it lacked to pretty much own that market is what Red Hat has in spades:
      marketing, corporate image, et cetera. DCC looks like it’ll provide
      what’s needed to make Debian-based distros collectively a major
      contender in the enterprise Linux market, and possibly the single
      biggest Linux horse in the race, measured as a collective
      distribution-space. There are some questions as to whether it will
      succeed, of course, it still being a new thing, and there are both
      benefits and dangers in the offering, in terms of what this will do for
      and to the standard Debian distribution.
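
      As an aside, for anyone who hasn’t used Debian’s software management toolset, here’s a minimal sketch of what everyday package management looks like with apt; the package name is just an example:

        apt-get update                 # refresh the package lists from the configured mirrors
        apt-cache search mail server   # search the package database by keyword
        apt-get install postfix        # install a package, pulling in its dependencies automatically
        apt-get upgrade                # upgrade everything installed to the newest available versions

      One command that resolves and installs dependencies for you is most of what I mean when I call it quite probably the best software management toolset on the planet.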

      On Tuesday this week, I got home from a trip to Colorado, where I
      had a job interview the day before. It looks like I’m moving. It also
      looks like I just became about twice as busy as I was before I made the
      trip for the interview, and I was running short on free time already.
      It’s worth it, though. I’m about as pleased as can be with this.

      Finally, here’s the text of an email I received:

      Dear Linux Professional,

      Novell Customer Communities once again is pleased to offer the free Novell
      Technical Resource Kit. With its release of Open Enterprise Server (OES),
      Novell has demonstrated a substantial commitment to the Linux platform.
      Now you can evaluate Novell software with this comprehensive 2-DVD kit.

      The Technical Resource Kit contains over 7 GB of licensed and evaluation
      software, as well as other tools and resources that will help you to
      evaluate and become familiar with the latest technologies from Novell.

      This comprehensive Tech Resource Kit includes the following:

      This Technical Resource Kit contains some of the latest products from
      Novell, including

      -Novell Open Enterprise Server
      -SUSE* LINUX Professional 9.3
      -Novell Identity Manager.

      It also includes additional tools and resources that will give you an edge
      over your competition.

      To order your free Kit, please visit
      http://www.novell.com/community/linux/order.php?sourceid=linwd1 and
      follow the instructions. Act now as supplies are limited.

      Sincerely,

      Novell Customer Communities

    • #3051822

      Language vs. Programmer vs. Tool

      by apotheon ·

      In reply to bITs and blogs

      There are a number of considerations to examine when conceiving,
      planning, and embarking upon a software development project. One of
      these is what language to use, and many other considerations affect how
      you must approach the decision. There’s a great deal of furor out there
      in the common developer mindspace over the issue of programming
      language choices ? which is to say that there are a lot of programmers,
      hackers, code monkeys, and project managers with very definite opinions
      about what the “best” programming language is, either in general or for
      a given task. Typically, those of us who wrangle code end up settling
      on one or two, or maybe even three, favorite languages. We’ll often
      defend those choices against all comers, passionately and stubbornly.
      Some of us have more fully reasoned explanations for our choices than
      others.

      It’s generally understood by many that the pointy-haired bosses of
      the world don’t know what they’re doing when choosing the technical
      requirements of a given project, judging by the massive prevalence of
      Java in all walks of programming: it’s effectively used as a scripting
      language in some cases and as a non-portable, nominally
      high-performance, platform-specific application suite development
      language in others. Neither of these tasks is one for which Java was
      ever suited, but it gets used in such roles, to one degree or another,
      all the time. Okay, so the pointy-haired bosses in the aggregate seem
      to be a bit cracked in the head. On the other hand, they’re getting
      their cues from somewhere.

      The big motivating factors, prompting middle managers to choose Java
      for all projects, are Sun marketing, availability of Java programmers,
      and the agreement of programmers. It’s the combination of the second
      and third of these that most concerns me here, and the third in
      particular. The second is easy to explain: the demand for Java
      programmers creates a feedback loop that prompts the creation of Java
      programmers, which in turn convinces managers to specify Java for
      projects, which prompts the creation of more Java programmers, et
      cetera, for a start. As a second consideration, educational
      institutions are teaching a lot of Java, which pretty much comes up for
      the same reasons that Java programmers get hired, thus entering
      institutes of education into the feedback loop. Recursion for everyone!

      20 GOTO 10

      Finally, we’re looking at the agreement of programmers. Yes, the
      programmers are feeding the overwhelming misapplication of Java.
      They’re doing so because they don’t know any other languages as well as
      Java, or because they’re afraid to suggest something other than Java,
      or just because it’s easier than disagreeing. They’re also doing so
      because of the availability of tools for rapid development, debugging,
      and other programming-related tasks. The more Java programmers there
      are, the more Java development-related tools there will be, both
      because the market for Java development tools is strengthened by
      programmer demand, and because with more Java programmers there are
      more people to provide input to Java development tool design. The more
      tools there are, and the better they get, the more people learn and use
      Java. Feedback loop, or potentially infinite recursion, once again.

      Paul Graham, genius hacker, self-made millionaire, and Lisp bigot
      par excellence, is very adamant about his notions of what to do when
      choosing a programming language for a project. He seems ready to
      entirely eschew all considerations of already extant development tools,
      and simply consider the language. That being the case, he urges the
      choice of the best language for the job at hand, and suggests that
      where the best programming languages are concerned, if you choose them,
      the programmers will come. Any programmer unwilling or unable to learn
      a “better” language than Java isn’t worth hiring, he seems to say. In
      fact, he may well have said it just like that: I wouldn’t put it past
      him. Frankly, I very nearly agree.

      The problem, of course, is that not all programmers are genius
      hackers. I know from personal experience that those of us with
      exceptional intelligence, skill, talent, or whatever the heck else,
      don’t tend to see ourselves as superior most of the time. Rather, we
      look around at the rest of the world and wonder why they fall short.
      I’m fully aware of that abnormal perspective for those who fall within
      the ninety-eighth percentile of some metric of exceptional
      characteristics, and fully aware that my own above-average status
      within some areas is balanced by below-average, or at least average,
      status within others. For instance, I’m a terrible, terrible cook, and
      unlikely to be able to overcome that failure: I’m culinarily
      challenged. I am, in the kitchen, developmentally disabled. Such is my
      lot. I take comfort in the fact that I learn abstracts quickly and am a
      fantastic judge of logical validity and consistency, both of which are
      conveniently helpful in programming tasks. Lucky me.

      So, from where I’m sitting, I don’t feel particularly brilliant,
      despite extremely high test scores. Rather, I look around at the
      average run of humanity and wonder what they thought was so difficult
      about a given IQ test. How did you not successfully determine the
      number of cubic units in that illustration of a three deep, four high,
      six long stack? I’m mystified by failures to understand, but at least
      I’m aware of the disconnect. I’m aware there’s a lack of realistic
      expectation that threatens to give me inaccurate estimations of how
      easy it might be to find someone with skills similar to my own. Maybe
      Paul Graham fell victim to that sort of broken expectation, though.
      Perhaps the way to choose your language involves checking around to see
      who’s available, what they’re capable of doing, and how well the thusly
      available options would suit your needs. You can pretty much guarantee
      Java will be an option. C/C++ is another good guess. Things get less
      certain from there, and they change as current programming fads evolve.
      Right now, .NET is big. It’s really big. It’s getting bigger. It may be
      the next Java, but that doesn’t mean Java won’t be the current Java for
      quite some time.

      The real test of a language’s inherent value to programmers and
      programming projects is what the experts like twenty years or so later.
      Looked at that way, Paul Graham has a definite point: Lisp is old. It’s
      not just old; it’s REALLY OLD. Great. It’s a wonderful language. It’s
      also hard to find Lisp hackers to write the next web app. Maybe you’ll
      have to just hire Paul Graham. Genius hacker or not, though, there’s
      only one of him, and if he’s the be-all and end-all, demand will so
      far outstrip supply that refusing to settle for anything else is
      likely to ensure the failure of a lot of business ventures. Fifteen
      years is too long a waiting list for beginning a software development
      project. Maybe it’s time to consider other programmers, and, I’m
      afraid to say, other languages.

      You need a balance between language suitability and programmer availability. That balance point centers around four things:

      1. How easily can you hire a programmer that knows this language, or can learn it quickly enough that he might as well know it?

      2. How quickly and smoothly can your programmers develop in the
      given language, and how much extra time goes into tasks like debugging?

      3. How maintainable is your code when you’ve been done for six months and you now need to alter the end result?

      4. How well does the language perform for the chosen purpose?

      Well. That’s sure a can of worms all its own. Now you’re looking at
      a number of new concerns, perhaps unexpected when you embarked on this
      journey. We’ll just examine these criteria individually, now.

      First of all, finding programmers to use your language, or making
      them if need be, is something you’ll have to sort out on your own.
      Resume drives might be a good start. Good luck with that.

      Second, there’s the question of rapid development time. Graham’s
      major argument for Lisp centers around this, but really, the language
      itself isn’t the only concern: so too is the lineup of available tools.
      Integrated design environments and suitable prototyping languages
      (where appropriate) are often important tools, and can have a
      significant impact on development productivity. Combine the language’s
      succinctness and flexibility of code with its available tools,
      including their ease and speed of use, to find a good final result for
      rapid development capability. Java, as I’ve mentioned, is great for its
      IDEs and other comprehensive development tools. It’s not, however,
      particularly good for succinctness and flexibility. Lisp suffers the
      opposite problem: it is very succinct and flexible, and perhaps the
      most succinct and flexible to judge by Paul Graham’s take on it, but
      the development tools available for it are anemic at best, and it
      loses some ease of development for people relatively new to it because
      of all those parentheses: if someone finds it difficult to read, it
      becomes difficult to use, thus perhaps intensifying the need for good
      development tools.

      Third, you must address maintainability in almost every instance. In
      fact, anything intended to be widely enough used that people not
      involved in the development process, at least in helping define the
      needs of it, will have the software at their fingertips should be
      considered to automatically require extreme ease of maintenance. Again,
      readability of code is important, as is the set of tools available for
      the purpose of editing it. Another important factor is good programming
      practice, but that’s more related to finding good programmers than
      anything directly related to the language itself. Of readability and
      tools, readability is typically the more important factor. With the
      sprawling complexity of the average Java source, and the arcane,
      abnormal appearance of Lisp code by today’s standards, both have
      issues. Both also have strengths, for Lisp is very succinct, and Java
      is almost automatically self-documenting in its verbose structure.

      Finally, you’ve got to choose according to needed performance. If
      your performance bottleneck is going to be the user, you can mostly
      ignore this aspect. If not, you’ll have to look into what specific
      requirements you have for performance and which languages are best for
      that, but I rather suspect you’re going to find yourself looking at C,
      C++, and/or Objective C. Anything with an intermediary
      bytecode-compilation and a VM interpreter, any traditionally
      interpreted language, or any compile-at-runtime language is unlikely to
      perform very well, except under edge-case conditions.

      So, now we’re looking at the tools and the language itself. It looks
      like development tools and language structure are of real importance
      here. Ultimately, I’m very skeptical of the urge to choose a language
      for tools over succinctness of source and flexibility of the language’s
      syntax and semantic structure. The more we support languages based on
      their tools, the more we’ll end up with unsuitable languages being the
      only option for perfectly suitable development tools. Nobody develops
      for anything that nobody wants to use, all else being equal, so you
      won’t see better tools for other languages unless you use them.
      Furthermore, drag-and-drop development looks quick, but the front-end
      of development is not the whole story. As you start getting into
      debugging and feature tweaking, I suspect you’ll find that readability
      of source is becoming a bit more important than how quickly you can
      slap it together. Just getting the parts of a program in a single pile
      does not constitute rapid development, and releasing such an unrefined
      and untested pile of code as a finished product will probably prove
      disastrous. Maybe you should pick a language that is succinct and
      flexible enough to mostly make up for any lack of good development
      tools, if it comes to that. I know that would be my choice, if that’s
      the only point of evaluation.

      If you can get away with it, you should probably be using one of
      those languages that inspires debates about whether it’s properly a
      “scripting” language or a “programming” language. This includes such
      linguistic options as Perl, Python, and Ruby, among others. It
      certainly does not include VBScript, Visual Basic, Javascript, JScript,
      Bash, or even PHP. It likewise doesn’t include C, Pascal, or even Java.
      Languages like Perl, Python, and Ruby all benefit from a rich, flexible
      structure that allows for very succinct, powerful programming
      capability, coupled with easy access to source and quick implementation
      for testing because you don’t have to run long compilation and linking
      sessions to create testable executables. The major problem with
      languages such as these is simply the possibility of performance issues. They
      even tend to be as technically portable as Java, because they can
      usually be run using their runtime compilers without preparatory
      bytecode compilation or binary compilation to produce a binary
      executable. Even if you need a language such as C, Perl serves as an
      excellent prototyping tool for C; whether Python or Ruby does so as
      well is a question I’m not prepared to answer right now.

      You can definitely find Perl and Python programmers, and once you
      have them you can turn them into Ruby programmers if need be, or turn
      Python programmers into Perl programmers and vice-versa, almost
      universally. Furthermore, competent programmers in these languages tend
      to be good at programming, particularly if they’ve learned the sort of
      lessons offered in an excellent book on programming practice called The Pragmatic Programmer, by Andrew Hunt and David Thomas. In fact, the two of them also literally “wrote the book” on Ruby when they authored Programming Ruby.
      There’s a wealth of information to be had from their writings, and if
      you can spare a week, I recommend making a complete read-through of
      The Pragmatic Programmer a requirement for your developers, and maybe
      even paying them for their time while doing so if you can afford it.
      If you’ve got people with talent and a keen grasp of the language for
      your project, The Pragmatic Programmer will cement the skill in place.

      So: tools or languages. I think tools are very tempting criteria,
      and they should certainly be a concern. Where the two options come
      into conflict, however, go with the syntax and semantic structure of the
      language itself. That should serve you well.

      Then again, maybe that’s just me.

      • #3052438

        Language vs. Programmer vs. Tool

        by jaqui ·

        In reply to Language vs. Programmer vs. Tool

        You forgot one language that fits in with C and C++ and is still a strong language:

        Delphi.

        Or, to use the older naming convention:

        Object Pascal.

        If I were picking a language for an application where performance is a high priority, it would be C, C++, Objective-C, or Delphi.
        If none would fit, I would also check assembler, Fortran, and COBOL before moving to any scripted language.

        While Perl, Ruby, and Python are fast for scripted languages, the
        interpreter is not installed by default in Windows for any of them.
        This is a problem, as the Windows-based interpreters do not have the
        same speed as the *x-based interpreters do
        (something like 25% of the *x speed under Windows).

        Java… what to say about the slowest scripting language available…
        I’ve seen Java apps refuse to run on *x, since the “correct” VM wasn’t installed
        (meanwhile the newer version of the VM was).
        I’ve also seen some that won’t run with Sun’s VM; they require one of
        the open source VMs, yet that creates the issue of whether that VM is going
        to be installed on every machine, and if not, how difficult will customers
        find getting and installing it on their flavour of *x?

      • #3055967

        Language vs. Programmer vs. Tool

        by gunnar klevedal ·

        In reply to Language vs. Programmer vs. Tool

        I am glad you speak warm-heartedly of hackers.

        Many people confuse hackers with crackers.

        In my opinion hacking is a programming method. It is an alternative to Top-Down and Bottom-Up programming. You may or may not use a flow scheme. It works best for short programs. By trial and error you write some code and test it, write a little more, change a few lines, comment out something, test again. When you are hacking, you might use skeletons and dummies for procedures and functions.

        Regards

        Gunnar Klevedal

      • #3047268

        Language vs. Programmer vs. Tool

        by gunnar klevedal ·

        In reply to Language vs. Programmer vs. Tool

        Programming education

        For educational purposes I suggest a language with strong type control. You should also be forced to declare all your variables explicitly.

         The rules for scope of variables, functions and procedures should be clear. It should also be clear what happens when you pass a variable by reference or by value.

        Some languages permit you to put a lot of functionality into one line, but I suggest you be a bit more verbose. Type conversions should be out of bounds for the first months. The same goes for variant types.

        OOP, Object Oriented Programming, is an option only, and I suggest you leave objects and inheritance et cetera out in the beginning.

         If your code is not self-documenting, do add comments.

         Regards

        Gunnar Klevedal

    • #3050947

      this crazy life

      by apotheon ·

      In reply to bITs and blogs

      I’ve been pretty busy lately. For the last couple days, for
      instance, I’ve been packing. I’m moving, you see, about 1500 to 2000
      miles away. Last week, I was flown to a job interview in another state,
      where I was offered the job, and I accepted. I think I might have
      alluded to this previously, but I can’t be bothered to check through my
      earlier posts here in ITLOG to be sure.

      In any case, I’m packing. Fun. This’ll be the first time I’ve had
      full paid relocation for a job, military notwithstanding. It’s kind of
      an interesting experience, to say the least.

      Goodbye consulting. I don’t think I’ll miss it. I’m kinda still in denial about having to give up the Wikimedia job, though.

      (edited to answer a question in comments, sorta)

      • #3050899

        this crazy life

        by Jay Garmon ·

        In reply to this crazy life

        Does this mean you’re giving up the Wikimedia gig now that you’re leaving the greater Tampa area behind? Sad day.

    • #3065673

      Daily Linux Lessons: alias

      by apotheon ·

      In reply to bITs and blogs

      Each day, expect a new short lesson in Linux. There will surely be
      days in which I forget, or don’t have the time, or simply don’t get the
      chance to get online, but for the most part I’ll be presenting very
      brief bits of knowledge for learning how to make your way around as a
      Linux system administrator. Since any serious Linux user is,
      ultimately, a system administrator, this will probably apply to just
      about all of you.

      Today, in honor of the new tradition of the last four years of
      politicians taking time on this date to euphemize their political
      power-mongering in terms of patriotism and respect for the victims of
      the September 11th attack in 2001, I’ll present a lesson in using the
      bash alias command.

      If you find yourself typing a long command on a regular basis and
      want to reduce the number of keystrokes, you’ll probably want to alias
      the command line instruction using a briefer syntax. For instance,
      if you want to be able to use ls --color=auto, which presents the output of ls
      in a color-coded form so that it’s easy to tell at a glance the
      difference between non-executable files, executable files, and
      directories, you can type alias ls='ls --color=auto' one time. Hit Enter, and it’s set. From then on, when you type ls, it will react as though you typed ls --color=auto.
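
      A few more examples of the same idea, typed at the prompt (assuming GNU versions of ls and grep; these particular aliases are just illustrations, not anything your system needs):

      $ alias ll='ls -l --color=auto'
      $ alias la='ls -la --color=auto'
      $ alias grep='grep --color=auto'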

      Now, this only works as long as you’re signed in. Sign out, then
      sign back in, or sign in simultaneously in another console environment
      at the same time, and you may suddenly find that it’s not working any
      longer. Rather than have to recreate all your favorite aliases every
      time you log in, you can use your .bashrc file to solve that problem with alias persistence.

      First, make sure you’re in your home directory. You should be able to get there quickly, either by entering cd or by entering cd ~ at the command line. Next, you need to edit .bashrc.
      Open the file in your favorite editor (Vim should do nicely, if you
      have any inkling how to use it). It’s possible you don’t have a .bashrc
      file in your home directory already: if that’s the case, you’ll have to
      create it. Don’t think it doesn’t exist just because you don’t see it,
      though, when looking at the contents of the directory. That leading dot
      means it’s an “invisible” file. To see the invisible/hidden files in
      your home directory, simply enter ls -a.

      If you really don’t have a .bashrc file, as I said, you’ll
      have to create it. Just create a text file of that name. It doesn’t
      even necessarily have to contain any text, though being an empty file
      won’t do much for you.

      To make those aliases persistent, though, all you have to do is add them to your .bashrc
      file. You can tack them on to the very end of the thing if you like.
      The most important thing to do when adding aliases to an already
      existing .bashrc is to ensure that you don’t stick them in the
      middle of a conditional block or something like that. These are usually
      very easy to recognize, as they tend to involve indentation, and tend
      to involve a beginning line that starts with something like “if” and an
      ending line that says something like “fi”.

      Now, for the tricky part. Remember that alias command I gave you?
      Yeah. Add that line to the file. No changes necessary. All it has to
      say is alias ls='ls --color=auto', and any time you log in you’ll get that behavior from the ls
      command.

      There may already be such an alias in your .bashrc, though if
      you’re not getting the desired behavior, that line is probably
      commented out. The line will begin not with the letter A in “alias”,
      but with the # character before that A. All you have to do to
      “activate” that alias command is delete the leading # character. Most
      Linux distributions automatically create a .bashrc file in your
      home directory, and it typically includes a number of useful aliases
      and other shell environment tricks. The same is true of a .profile or .bash_profile
      file (which one you have depends on the distribution and whether you’re signed
      in as a normal user or as root). Sometimes, those useful commands are
      commented out so that you can make them active by removing the comment
      character if you want to. Carefully read any additional comments in the
      file about what each command does before uncommenting it for use: you
      may otherwise find yourself accidentally doing things you didn’t mean
      to do, as a simple command acts in complex ways because of the
      now-active alias in .bashrc. Of course, you can also deactivate an alias or other command in .bashrc by adding the # character at the beginning of the line, if you so desire.
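
      Put together, the tail end of a .bashrc might look something like this (the extra aliases here are only illustrations; add whatever you actually use, and keep them outside any if/fi blocks):

      # personal aliases
      alias ls='ls --color=auto'
      alias ll='ls -l --color=auto'
      # alias rm='rm -i'    # commented out: delete the leading # to activate it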

      As you get more comfortable with the command line, you’ll
      begin to come up with stuff that you want aliased to make your job
      easier. As you start coming up with really complex aliases, you’ll
      probably want to start thinking about simply writing shell scripts for
      them rather than aliases, but that’s a lesson for another day. I’ve
      already taught quite a bit more than just the alias command, I think.

    • #3057430

      Daily Linux Lessons: fetchmail

      by apotheon ·

      In reply to bITs and blogs

      I really should have made this one “Daily Linux Lessons: keeping a
      publishing schedule”, but I clearly don’t know how to do that. I start
      a daily series on the 11th, and spend the 12th and 13th too busy to add
      updates. No, really, I mean to fix that.

      Today’s lesson is about how to completely avoid dealing with complex
      mail transfer agents (MTAs) like Sendmail and Exim when all you want is
      to get your email. This is probably more commonly done in the Linux
      world with Eric S. Raymond’s Fetchmail daemon than any other single
      piece of software.

      The first part of getting Fetchmail working for you is going to be
      on you: you have to get it installed. In Debian, that’s a very easy
      task, involving nothing more than typing “apt-get install fetchmail” at
      the shell prompt. Other distributions of Linux have similar methods for
      installing it, such as Fedora’s and Yellow Dog’s YUM utility,
      Mandriva’s urpmi, and so on. If you can’t, or won’t, install it in that
      manner, you may have to grab a tarball (a .tar.gz file) of it from HERE,
      then compile it yourself. Installing from source is quite beyond the
      scope of this little article, so as I said you’re pretty much on your
      own for installing it. Once installed, however, you need to configure
      it so that it gets your mail for you. I’ll walk you through the
      creation of a simple .fetchmailrc file, used to configure the program
      for your purposes.

      When Fetchmail is first installed, you probably won’t have a
      .fetchmailrc in your home directory and, if you do, you will have to do
      some configuration anyway. The first thing to do is open (or create)
      .fetchmailrc in your user account’s home directory (probably
      /home/[user], which is what ~ expands to). In general, when signed in as a
      given user, you can get to that account’s home directory simply by
      entering cd at the command line without any arguments.

      Within that file, you need to include some information so that
      fetchmail knows where to get your email and what to do with it once it
      has gotten it. You’ll want to include configuration options such as the
      following in the .fetchmailrc file, each one on a separate line:

      set postmaster [user]
      The purpose of this one is to ensure that the email actually goes to the username you’ll be using to check your email.
      set bouncemail
      This should be the
      default for Fetchmail on your system, but it’s worth being certain. If
      this is set to “no bouncemail” instead of “bouncemail”, the sender of
      an email won’t get direct notifications that mail has bounced. Instead,
      it will go to the postmaster. Since you’re setting yourself as the
      postmaster, this shouldn’t be a problem, but again, it’s worth being
      sure.
      set no spambounce
      This is to
      prevent bouncing of spam. You usually don’t want spammers to get
      bounces because that just indicates that there’s someone listening at
      the other end, and they might send more.
      set properties ""
      This is actually
      ignored by Fetchmail, as far as I can tell, but separate extensions to
      Fetchmail might make use of it. Give it an empty string by setting it
      to “” (two double-quotes next to each other with no intervening space)
      to ensure this field is treated as blank unless you have some specific
      reason to do otherwise.
      set daemon [n]
      In this, replace the
      [n] with a number of seconds between mail checks you want it to
      perform. Setting the daemon option tells fetchmail to run as a
      background process that polls your email server on a regularly
      scheduled basis for new email. If you’d rather just manually run the
      fetchmail command every time you want to see if you have mail, leave
      this line out of your configuration file entirely.
      poll [url] with proto POP3
      user '[login]' there with password '[pass]' is '[user]' here

      Believe it or not, that’s pretty much it. That last entry requires a little extra explanation, though.

      The “poll” entry tells Fetchmail where to go looking for email. You
      must enter the fully qualified domain name (FQDN) of the email server
      where incoming mail is stored in place of [url] in the above example.
      This should be something like “mail.domain.com” (without quotes). The
      POP3 part of that specifies the mail protocol being used, and is an
      assumption on my part, but chances are good that either you’re using
      POP3 or you’ll know what you should do instead of that.

      On a second line below the poll line, replace [login] with your
      login name on the mail server (probably not exactly the same as your
      username on the local system where you’ll be running Fetchmail).
      Replace [pass] with the password you use on the mail system to
      authenticate. Entries surrounded by single quotes must remain
      surrounded by single quotes.

      Some of you out there might be surprised to see that the
      .fetchmailrc file uses plain text password storage. Clearly, this is
      not the most secure method of storing information in existence, but as
      long as you’re on a trusted system where you maintain strict access
      control at all times, you should be okay. If you’re not, you should
      look into other options for getting email, such as using a mail user
      agent (MUA) that does its own email retrieval.
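
      Since the password sits in that file as plain text, it’s also worth making sure nobody else on the system can read it. Fetchmail will typically refuse to run if the file’s permissions are too permissive, so set them like this:

      $ chmod 600 ~/.fetchmailrc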

      For those who use command-line email clients like Mutt or Mail and
      don’t want to screw around with complex mail retrieval options,
      Fetchmail is a capable little tool. It has a lot of configuration
      options at which I haven’t even hinted here, that you can read about by
      way of web searches or using the man fetchmail command at the shell prompt.
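
      Two invocations worth knowing while you’re getting the configuration right (most useful before you turn on the daemon option): the -v option gives verbose output so you can watch the conversation with the mail server, and --quit stops a running Fetchmail daemon.

      $ fetchmail -v
      $ fetchmail --quit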

      To wrap this up, I’ll post the text of a .fetchmailrc file I use,
      edited to block out usernames and passwords with metasyntactic
      variables (foo, bar, baz) instead:

      set postmaster "foo"
      set bouncemail
      set no spambounce
      set properties ""
      set daemon 600
      poll www.apotheon.com with proto POP3
      user 'bar' there with password 'baz' is 'foo' here

      As you can see, I’ve set it to poll for new email once every ten
      minutes (600 seconds) and the username I use for both the user
      assignment at the end and the postmaster assignment at the beginning is
      the same.

      It’s my experience that many of these simple programs that can make
      your life very easy in Linux are at first difficult to learn to
      configure properly, not because they’re particularly complex to
      configure, but because they allow so much flexibility and capability
      for configuration without being very clear about what exactly is
      needed. Sometimes, what’s needed to get you started isn’t so much a
      full explanation of the utility, but simply a very brief “this is all
      you really need” tutorial. If you need more capability than that, it’s
      easy enough to find more information from manpages (I think I’ll do a
      Daily Linux Lesson on manpages, too) and from the web.

      • #3057301

        Daily Linux Lessons: fetchmail

        by jmgarvin ·

        In reply to Daily Linux Lessons: fetchmail

        Wow, thank you!  This is actually a great tutorial for anyone to
        see.  I will be sending my students who have questions on
        fetchmail to your blog.

        Thanks for saving my sanity!!!

    • #3056784

      Daily Linux Lesson: bash basics

      by apotheon ·

      In reply to bITs and blogs

      Let’s get back to basics. People need to know how the shell works. This will be a general tutorial on using the shell, from the point of view of a bash user because bash is the ubiquitous default shell of almost all Linux and similarly modern unices.

      Bash is actually just a user-executable program that creates a user interface environment. The explorer.exe program on a Windows system does effectively the same thing, but with a completely different UI: whereas explorer.exe is the basis of a GUI (graphical user interface) environment, bash creates a CLI (command-line interface) environment. Usually, when someone refers to the “shell” on a unix system, the term refers to bash or, more rarely, something like tcsh or the Bourne shell.

      The Bourne shell was created lo these many moons ago by a guy named Bourne. It was superseded in SysV systems by the korn shell (created by a guy named Korn, actually, and not the band of the same name), which pretty much did the same things plus some extras. Since then, the Bourne again shell (bash) was created, which is designed to be backward-compatible with the Bourne shell and incorporates improvements and functionality extensions both from the korn shell and from the luminiferous ether, or wherever it is that new features are born.

      The Bourne shell was an executable program called “sh” typically located in the /bin directory, thus becoming /bin/sh by way of absolute filesystem path. In modern Linux systems, /bin/sh is actually a symlink (a “shortcut”, in Windows-speak) to /bin/bash, which runs the Bourne again shell. Usually, however, when running bash by way of the sh command, it triggers a special Bourne-compatible mode of bash so that you’ll get better shell script compatibility for scripts written for the original Bourne shell. For the most part, this is a wholly superfluous precaution, as a default instance of bash will tend to run Bourne shell scripts exactly as well as the original Bourne shell did.

      Bash runs in three standard modes. Two of them are interactive modes, and one is a noninteractive mode.

      The noninteractive mode is used by shell scripts. When a shell script is invoked, it runs a separate bash instance “in the background”. It does not source any user configuration files, so your .bashrc and .profile or .bash_profile config files won’t directly affect it at all. On the other hand, this noninteractive mode shell does inherit configuration from whoever invoked it. Thus, if you run a shell script explicitly from an interactive shell, that shell script will inherit all configuration options that apply to the interactive shell while it’s running. If, on the other hand, it is run by cron (a scheduled action daemon run by the system automatically) or some other system command, it does not inherit any interactive shell configuration because it was not invoked from an interactive shell. Thus, a cron job executing a shell script that expects ~/bin to be in its path will probably have a rude awakening when it cannot find whatever file it expects to be in its list of default paths. Furthermore, interactive shell command aliases go right out the window.
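
      A quick illustration of that last point: a script meant to be run from cron should set its own PATH rather than counting on anything from an interactive shell (the nightly-backup command here is purely hypothetical):

      #!/bin/sh
      # cron gives the script almost no environment, so set PATH explicitly
      PATH=/usr/local/bin:/usr/bin:/bin:$HOME/bin
      export PATH

      nightly-backup    # a hypothetical script living in ~/bin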

      Of the two types of interactive shells you can run with bash, one is called a “login shell”, and one is not. The difference between them is in what file they source for configuration, and in whether they automatically kick you into the home directory of whatever username you’re using. When a login shell starts, your working directory (the directory you’re currently “in”) is changed to the home directory for your current username. Thus, on most systems, if you open a login shell as user “bob”, you will find that you’re suddenly currently “located” in /home/bob.

      The login shell sources a profile file in your home directory. This file is most often called .bash_profile, but sometimes you might run across one called .profile or .bash_login instead. On Debian systems, for instance, the default seems to be .bash_profile for normal users and .profile for the root user’s home directory. I’ll refer to .profile here, because it’s fewer letters to type. In any case, it’s usually a good idea to source .bashrc in .profile so that when you log in, you get all your shell preferences in one place.

      Keep in mind that when configuration files for bash are being sourced, the configuration options are applied in the order in which they appear, and they override any conflicting configuration that came before them. This means that if you want most of what’s in .bashrc to apply to your login shell, but want to change one thing, you can source .bashrc then override that one option with the next line in your .profile.
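
      As a minimal sketch of that arrangement, a .profile might contain nothing more than this (the PS1 line is only an example of an override, not something you need):

      # ~/.profile -- sourced by login shells
      if [ -f ~/.bashrc ]; then
          . ~/.bashrc
      fi
      # override one thing from .bashrc for login shells only
      PS1='\u@\h:\w\$ '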

      Nonlogin interactive shells don’t source .profile: instead, they source .bashrc. It’s typical for the .bashrc file in your home directory to source the system bashrc file, however, which should be located somewhere in your /etc directory hierarchy. On a Debian system, for instance, /etc/bash.bashrc is the system’s bashrc file, and your /home/username/.bashrc file will likely contain a line that sources bash.bashrc so that system-wide bash configuration will apply to your interactive shells. Again, as with .profile, bash configuration is executed in the order in which it appears in the configuration file. As such, anything that comes after the line that sources bash.bashrc will override any conflicting lines in that file.

      If you make changes to your bash configuration files, keep in mind that configuration options are normally only sourced when the shell is first started. Furthermore, any shells that are started from within an already running shell, such as noninteractive shells started when you run a shell script, inherit the configuration options of the parent shell, which means that you can change configuration files and not see the changes take effect, even if you start a fresh shell from within an already active login shell. You would normally have to actually log out and log back in to restart the shell after changing configuration. There is a way to force a re-sourcing of .profile and/or .bashrc, however. To do this, use either the . or source command. For instance, if you want to source your .profile, you might do the following:

      $ . ~/.profile

      or

      $ source ~/.profile

      Finally, you can specifically launch a login shell at any time from within a nonlogin shell by using the su command. Normally, su starts a nonlogin shell, whether you’re using su without arguments to gain root privileges or with a username to start a shell with that username’s permissions. In either case, what you normally get is a nonlogin interactive shell. To make it a login shell, just make your first argument a dash, like so:

      $ su - [username]

      No, I’m not going to tell you more about how to set up a bash configuration file today. I’ve written enough without discussing bash syntax. You can get underway, though, by entering either man bash or help at the command line. In fact, I recommend you try both.

      • #3203001

        Daily Linux Lesson: bash basics

        by paul ·

        In reply to Daily Linux Lesson: bash basics

        Just wanted to thank the author for the above it was very useful to me.

        As a DBA I don't do much with the OS any more; we always have an IT admin guy these days. However, our admin guy was out (wife had a baby) and I had to figure out which profile gets sourced by what and when. The above told me enough to figure out what I needed to do for non-interactive logins that were not operating.

        Again thanks.
        Sincerely
         Paul

    • #3059051

      Linux Lesson: quitting

      by apotheon ·

      In reply to bITs and blogs

      Clearly, “daily” was too ambitious. These will be, simply, “Linux Lessons”, starting today. That’s not to say they’ll come any more slowly than they already have. They just won’t be appearing any more quickly — like, say, daily — either. So: without further ado, here’s the next lesson, about quitting.

      One of the most important things to know about using Linux utilities is quitting — how to make them stop. Once you get started with something, you need to know how to get stopped with it as well, so you can try to undo damage you’ve done, move on to something else, or just finish what you started. With some utilities and applications in the unix world, it’s not always immediately obvious to the uninitiated how one goes about quitting.

      In general, you can quit any and all command-line programs by using Ctrl+C. Just hold down the Control key and press C, and you’ll abort most programs going on in the foreground (interacting with the user in some way, even if only by dumping output to the shell, also known as STDOUT or “standard output”). This is sort of a violent killing of the program, cutting it off in mid-process.

      If you just want to put something on “pause”, out of your way, that is currently in the foreground, you can suspend it. This is done with Ctrl+Z. If, after suspending it, you decide you want to end it completely, you can use the kill command. To do this, first check the job number of the suspended process with the jobs command. Starting a program, suspending it, checking its job number, and killing it might look something like this:

      $ programname
      ^Z
      [1]+  Stopped            programname
      $ jobs
      [1]+  Stopped            programname
      $ kill %1
      
      [1]+  Stopped            programname
      $ return
      [1]+  Terminated         programname

      In all of that, of course, the parts you’ve entered at the shell prompt follow the $, which represents a non-root user’s shell prompt. The ^Z shows where you used Ctrl+Z (the ^ is the control character identifier).

      Of course, these are particularly ungraceful ways to get out of various programs. This is the sort of thing you do if you’ve made a mistake somewhere, or something goes wrong. Most of the time, you’ll want to use a program’s own methods for ending it less abruptly.

      With many programs, the Q key does that job. For instance, with pagers such as less and more, merely using the Q key will close them. The same is true of the mutt email client, for instance. Even the vim editor does something similar, though you must be in Command mode in vim, and it’s not the Q key alone that does it.

      With vim, in command mode, you type :q and hit Enter to exit the program. If you’re not in Command mode, or aren’t sure whether you are, start pressing the Esc key until it complains at you, most likely by beeping the system speaker. You’ll type :q, and it will appear in the status line at the bottom of the vim editor screen.

      It’s possible that vim will not want to let you exit so easily. It might tell you that the file has not been saved since the last change was made. There are two ways to handle that. One is to save and exit, and the other is to force an exit without saving. To save and exit, either hold down the Shift key and press Z twice (type ZZ; no Enter needed), or enter :wq (and when I say “enter” something, I mean type it and press the Enter key). You can use :w alone to save changes without quitting, of course. On the other hand, if you want to quit without saving the changes you’ve made, you can simply enter :q! instead of :q or :wq. The exclamation point essentially tells vim “No, really, I mean it!” in this case.
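
      To recap the ways out of vim in one place:

      :q     quit (complains if there are unsaved changes)
      :q!    quit, discarding unsaved changes
      :wq    save and quit
      ZZ     save and quit (Command mode, no Enter needed)
      :w     save without quitting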

      There are a few programs that require an EOF (End Of File) instruction to close. One of these is the write command, which opens an interactive communication program that allows you to exchange messages with another user on the same machine. This isn’t a lesson in write, of course, so you’ll have to sort out how useful that is to you on your own, but the EOF command for most programs that require them is Ctrl+D.

      You can always leave a shell instance by entering “exit” at the shell prompt. If you’re using a login shell, this will log you out of that shell. If you opened this shell instance by opening a terminal emulator, such as XTerm or Konsole, this will close that terminal emulator (or at least that tab, if you’re using a tabbed terminal emulator).

      If you want to stop a service from running that is handled through the init system, you can have a look at what’s handled there by typing ls /etc/init.d at the command prompt. You’ll get a look at the contents of the init.d directory, and you can stop any of them by entering that service name with a “stop” argument. For instance, if your current working directory is /etc/init.d, you can stop SSH by entering ./ssh stop (which will, as it appears, stop the SSH server). If that’s not your current working directory, you may have to use the absolute path, such as by entering /etc/init.d/ssh stop, instead of using the relative path ./ssh. Keep in mind that not everything in init.d is a service, exactly: some of that serves other purposes. It might be a bad idea, for instance, to enter ./halt stop. If you want to start one of those services when it’s stopped, of course, you can use “start” instead of “stop”, and you can probably do both quickly with “restart” (depending on your system’s configuration).
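
      For example, assuming a Debian-style layout and root privileges (the ssh script name may differ on your system):

      ls /etc/init.d              # see what's there
      /etc/init.d/ssh stop        # stop the SSH server
      /etc/init.d/ssh start       # start it back up
      /etc/init.d/ssh restart     # stop and start in one step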

      Some programs have more quirky ways to stop them — methods other programs probably won’t use. For instance, if you’re using the screen utility, you can detach from it by typing Ctrl+A followed by pressing the D key. This leaves the screen session running in the background, and you can reattach to it by entering screen -r at the shell prompt (or screen -x if the session is still attached somewhere else). To actually entirely end the current window of screen, you’d use the K key instead of the D key, and if you want to kill the entire screen program, you’d use Ctrl+\ instead of a single key, so that what you’d use would actually be, in total, Ctrl+A Ctrl+\.

      You might have occasion to want to abruptly kill off your X session (the GUI environment). Depending on how you have your system set up, X may automatically “respawn”, starting up again the moment it has been killed, which is usually what people want with desktop systems when they end their X sessions: they want X to be ready to use again. Sometimes, however, it also might just drop you to the command line and leave it to you to start again if you so desire (probably by entering startx). Aside from the usual ways of getting out of your X session (such as using your main window manager menu to find an “exit” or “logout” option), you can also try Ctrl+Alt+Backspace. On some systems, the traditional three-finger salute — Ctrl+Alt+Del — might also do the same thing, but on many systems it actually triggers a system reboot instead, so you’re safer trying Ctrl+Alt+Backspace if all you want to do is stop the X session.

      As for Linux itself, I’ve already mentioned the three-finger salute that is so familiar to most Windows users. Some window managers and display managers allow for additional means of stopping or restarting your system, by way of menus and the like, of course. At the command line, though, there are a few common methods for stopping or restarting the system that will work pretty much everywhere.

      First off, there’s the shutdown command. This requires two arguments to use: one is an indicator of either a reboot or a system halt, and the other tells it when to perform this task. The first argument you should use will be either -r or -h, generally. The -r will cause it to reboot, and the -h will cause it to simply shut down and stay shut down until you start it again yourself. Depending on what system configuration you have, you may need to turn off the power yourself after the OS has halted, or it may simply use soft power controls to turn the computer off automatically. The second argument is the time hack for shutdown, measured in minutes. All you have to do is enter a plus sign and a number here, or use the word “now” if you don’t want it to wait (now is the same as entering +0). You can also enter an “absolute time” in the format hh:mm, whereupon the system will shut itself down at that time. The shutdown command is designed to send imminent shutdown warnings to any logged-in users, letting them know when the system will be going down so they can save their work and close any open programs. Of course, shutdown is a more complex utility than this, but this is the important stuff. For instance, the -h or -r isn’t strictly necessary, but leaving that argument out causes the system to drop into a single-user administrative mode with very few of the system’s normal capabilities activated (perhaps similar in concept to “safe mode” for Windows).

      Using shutdown -r actually just ends up sending the system the init 6 command, which tells the system to enter runlevel 6 — a reboot runlevel, which shuts the OS down and reboots it in your default runlevel again (usually 3 or 5, depending on system configuration). Likewise, shutdown -h ends up sending the system the init 0 command, which is system halt. Thus, if you’re going to just use “now” or “+0” for your time argument in the shutdown command, you could just as easily get the same results by entering init 0 instead of shutdown -h or init 6 instead of shutdown -r.

      Finally, there are the commands halt and reboot, which actually call the shutdown -h or shutdown -r instructions, respectively, as though they’d been invoked with +0 as the time argument.
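
      A few concrete invocations, all run as root:

      shutdown -r now       # reboot immediately
      shutdown -h +10       # halt in ten minutes, warning logged-in users
      shutdown -h 23:30     # halt at 11:30 at night
      init 6                # same effect as shutdown -r now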

      That pretty much sums up the highlights of quitting with Linux.

    • #3062323

      Another day, another serving of FUD.

      by apotheon ·

      In reply to bITs and blogs

      I currently have two Linux Lessons half-written, but I lost the mood
      on both of ’em before finishing them. I’ll finish one of them up in the
      next day or two and post it, then get to work on the other. I think the
      one about automatic indentation in vim will probably be the first to
      get posted.

      I also have at least three different Windows->Linux articles in
      progress, but they’re each at a point where I’ll need to do some
      thinking before I continue. One is about choosing distributions for the
      tasks at hand, one is about the important skills of a system
      administrator and his role in the migration, and one is an experimental
      piece about the Fedora distribution in particular (though I’m not sure
      that’ll ultimately end up being part of the Windows->Linux series:
      if it is, I’ll have to do writeups on Debian, Slackware, Mandriva,
      SuSE, Gentoo, and Ubuntu, at the very least, which could prove
      difficult, to put it mildly).

      My Understanding OSes series is in desperate need of planning. It’s
      something that’ll need more structure than what I could give it if I
      just started writing something, so I’ll have to make sure I’ve got some
      plan for the next half-dozen or so articles before I start writing the
      very next article in the series.

      I’ve got some odd meditations about programming language design and
      the necessities that go into quality instruction in programming, mostly
      from a code-hacker perspective (as opposed to a corporate “software
      engineer” perspective), that may shape up into something. One of these
      things is already a half-written article, though it reads like the
      first installment of another series so far. We’ll see where that goes.

      . . . and here, you all thought I wasn’t writing anything.

      I’ve been thinking about all this brouhaha over Symantec’s
      announcement that the native browser on the only OS platform that’ll
      run its products is “more secure”, judging by frequency of reported
      vulnerabilities to which the browser “vendor” will admit. I was
      planning on working every flaw in the Symantec analysis into that
      sentence, but after just three, I realized that there was no way it
      would be a readable sentence by the time I was done with the other
      dozen.

      Appropriately enough, the president of the company that employs me
      called me down to his office to help him with a problem on his laptop.
      He has two of about six systems in the whole company that have Windows
      on them and actually do anything useful (the seventh being a test
      system), and he called me because after installing the connection
      software for his cellular modem service on the laptop Outlook started
      failing to be able to retrieve incoming email. Wanna take a guess at
      the problem? I’ll give you a hint: I’ve already mentioned the problem
      in this post.

      Yeah, that’s right. It was a Symantec product flipping out. The
      Norton AntiVirus email scanner got turned off and back on, and the
      problem magically cleared up. I swear, any company that can’t get its
      so-called security software to work properly and consistently doesn’t
      have a hell of a lot of room to talk about the security of anyone
      else’s software. Until they manage to make an antivirus application
      that doesn’t itself get compromised by the Flavor Of The Week virus,
      they aren’t getting much attention from me unless it’s to point and
      laugh (or troubleshoot their issues when I have to).

      I look forward to the day when I get the go-ahead to roll out a
      centralized enterprise antivirus solution for our network, and no
      longer have to deal with Symantec’s NAV. Now, if only I could escape
      Symantec “expert analysis” in the news I read, I’d be set.

    • #3063323

      Open source software is doomed!

      by apotheon ·

      In reply to bITs and blogs

      Okay, not really. I do keep hearing that, though: open source software is doomed. It’s doomed because Microsoft will crush it. It’s doomed because it’s not profitable. It’s doomed because people want stuff with corporate support. It’s doomed because it’s easy to use.

      Bad news, folks: you’re wrong. Let’s take the example of database software. What are the major enterprise-class database management systems? You’ve got Oracle, Informix, blah, blah, about half a dozen or so, and PostgreSQL.

      Oh, wait, what about MS SQL Server? More bad news: nobody running a database in a serious, high-load enterprise implementation would use MS SQL Server. I don’t even include MySQL in the “enterprise-class” DBMSes, and it’s still good enough to run LiveJournal and the Wikimedia Foundation (the latter being solidly in the top 100 websites worldwide for traffic, beating out even the New York Times).

      PostgreSQL is far from dying on the vine: it’s growing in market share. People like it. No, they love it. Better yet, it’s free. Businesses love it. Far from suffering from the problems cited by doomsayers who claim that there’s no corporate backing, no support, and no profit in open source software, PostgreSQL is so hot people are throwing money at it because they want it to keep advancing so they can continue using it. As PostgreSQL core team member Josh Berkus put it, “We pride ourselves on being business-friendly, and enjoy support and resource contributions from a coalition of companies because of it: at least half of our major contributors are paid to work on PostgreSQL by companies who use it.” Got that? At least half the major PostgreSQL contributing developers are paid to do the work by companies who use PostgreSQL. So much for the arguments from the peanut gallery.

      Yeah, yeah, I know: I was going to post more Linux Lessons. I’m busy. Really busy. I just haven’t had time to write anything that well thought-out. Sorry.

      • #3062610

        Open source software is doomed!

        by denwasson ·

        In reply to Open source software is doomed!

        I’m just starting to tinker with PostgreSQL (read – bought a few books and trying it out) and love it.

      • #3061659

        Open source software is doomed!

        by anacompsolutions ·

        In reply to Open source software is doomed!

        I agree with you that open source is FAR from “doomed”, but you lost any and all credibility you gained with me when you write “More bad news: nobody running a database in a serious, high-load enterprise implementation would use MS SQL Server.”

        Ha!  Ridiculous. And untrue, by the way.

        As a professional DBA and developer, I have long experience showing EXACTLY the opposite is true.

      • #3073909

        Open source software is doomed!

        by jack ·

        In reply to Open source software is doomed!

        Open source easy to use?!!!  I like that one.  I’ve been trying for two months to get Apache / Tomcat upgraded on a Unix server.  Apache compiles with a compiler from the hardware vendor, but not with the GNU compiler.  Tomcat connector won’t compile with any compiler.  Solution? – ignore the compile errors – find that in the man pages.  Now run “make install” – sorry, only gmake can do that.  Whazzat? I need to compile gmake?  Oh. 

      • #3136701

        Open source software is doomed!

        by ke_xtian ·

        In reply to Open source software is doomed!

        Yeah MS SQL Server will run high performance apps, but only if you are willing to scale out dramatically.

        What it is is a kludge, and I am speaking from real-world experience as well.

      • #3136696

        Open source software is doomed!

        by jdhannah ·

        In reply to Open source software is doomed!

        I must agree that you are badly misinformed about SQL Server’s
        capabilities.  It is most certainly capable of enterprise-level
        applications and I also speak from experience.  What information
        do you base your opinion on?  SQL Server 2005 is even more robust
        and I think three years down the road we will see that it has increased
        MS share of the enterprise db market.

        And I’m not anti-open source by any means.  But open source
        products, in general, are NOT easy to use compared to commercial
        rivals.  I think open source is by its nature decentralized to the
        point that ease-of-use is one of the first casualties.  It can be
        a huge pain to do even simple things as was also described above. 
        There are, of course, exceptions.  I have found MySQL to be very
        easy to use and as was pointed out, it is used in some very heavy use
        environments.  Open source will survive and prosper and help keep
        companies like MS on their toes, but they will never be dominant. All
        the various Linux installs (which have morphed into essentially
        different OSs based on a common core), the driver issues, etc. are too
        much of a pain and kill you on TCO.  Plus the revenue models of
        open source companies don’t allow for the real R&D expenditures
        necessary to compete with commercial products.  Just my opinion…

      • #3137509

        Open source software is doomed!

        by apotheon ·

        In reply to Open source software is doomed!

        brief note: I direct the attention of those who read this blog post and immediately think “But I use MS SQL Server in a ‘serious, high-load enterprise implementation!'” to some follow-up commentary in the second half of my next blog post, entitled “failing to understand how it works”. The second half of that post bears on this one, and the misunderstandings inherent in the sort of contradictions I’ve received. Ultimately, I think people who make a career of supporting MS products (a challenging career that demands a certain amount of professionalism to perform well, to be sure) end up with a very limited perspective in terms of what constitutes certain economies of scale. I have never, ever, in my entire life, been told “I work with MS SQL Server in this situation,” where “this situation” compared to the loads I’ve seen handled well by some of the other DBMSes I’ve mentioned. MS SQL Server really seems to push its limits at about the level where Oracle (for instance) begins to make economic sense. Something like PostgreSQL significantly overlaps both scales. Go read the other blog post before posting about how deluded I am. Thanks.

      • #3207083

        Open source software is doomed!

        by ecacofonix1 ·

        In reply to Open source software is doomed!

         

         

        How to make revenue from free & open source software is one of the most frequently asked questions these days. While there have been a few successful examples of companies (like MySQL, Red Hat, etc.) that are making money, I'd surmise that these are still very early days for open source revenue & profit models.

         

        While open source as an operational paradigm has certainly had exceptional success against proprietary, closed-software models in the recent past, in my opinion a lot more thought needs to be given, and experimentation done, before viable revenue models emerge for free & open source software that can successfully compete with the current proprietary software revenue model. Some specifics of the business models are emerging fast, but it will take a few years for the market to test each of these out and, hopefully, the fittest will survive.

         

        A site that focuses exclusively on revenue models from free, open source software is Follars.com – Free, Open-source Dollars – http://www.follars.com !

         

        Ec @ IT, Software Database @ http://www.eit.in

    • #3072035

      failing to understand how it works

      by apotheon ·

      In reply to bITs and blogs

      Let’s have a look at a recent ZDNet article:

      Automating Linux security should be a higher priority by ZDNet‘s
      Dana Blankenhorn — For a mass market in Linux to develop, such a
      market needs protection at consumer price points. And they need to see
      a variety of vendors offering this service.

      Does anyone else see what’s wrong with that article’s assumptions? I
      know, some of you may not have read it yet. I’ll quote from it a little
      bit for you.

      Dana Blankenhorn says “I strongly believe that Linux users badly need
      the kind of automated anti-viral patch management service that Windows
      users now take for granted.” Clearly, he hasn’t been paying attention.
      I’ll break it down a bit for you.

      1. automated: Linux is as automated as you want it to be. You want
        automated patch management? Set up a cron job to grab software updates
        using your distribution’s standard package management system on a
        regular schedule (there’s a sketch of one just after this list). With the GUI tools available in Linux these days, that can even be a point and click operation. That’s not too tough, is it?
      2. anti-viral: The most commonly cited reason for Linux security with
        regard to viral activity is its privilege separation. Because of that
        simple architectural characteristic, very little of a Linux system is
        at risk from infection. This only addresses the “zero hour” issue,
        however. It doesn’t make Linux invulnerable, just less vulnerable; it’s
        not that a virus can do no damage because of this, just that the damage
        is extremely limited. The real protection against mobile malicious code is actually
        in open source software development. Popular open source software is
        well protected because vulnerability patches and new, secured software
        versions are produced at the source (if you’ll excuse the pun),
        tweaking the code of the previously vulnerable software itself so that
        the entire vulnerability ceases to exist. This is as opposed to the way
        virus activity is addressed on the Windows platform, where Microsoft
        just shrugs its shoulders and figures third-party antivirus software
        vendors will cover it with a virus definition meant to defend against
        the latest specific version of a given virus that exploits a particular
        vulnerability. Because the core vulnerability remains, new exploiting
        code can be cranked out easily, simply by adjusting the old virus so it
        doesn’t match the old virus definition the AV software uses to
        “protect” you. Virus patching is a solved issue for major open source
        projects.
      3. Windows users: Ahh, here’s the likely problem. I’m going to guess
        he’s still used to using Windows more than Linux, if he’s used Linux at
        all. Windows has a certain way of doing things, and that way of doing
        things produces certain circumstances that necessarily perpetuate the
        Windows way of doing things. It seems like Blankenhorn is making the
        classic mistake of assuming that with which he is familiar is
        universally applicable.
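
      As promised above, here’s a minimal, Debian-flavored sketch of the cron job idea from point 1 (the schedule and commands are illustrative assumptions; substitute your own distribution’s package manager):

      # /etc/cron.d/autoupdate -- fetch and apply package updates nightly at 3:00 a.m.
      0 3 * * *   root   apt-get update && apt-get -y upgrade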

      He further goes on to say “I know there are many Linux experts who
      say that, because Linux doesn’t have ActiveX controls, viruses aren’t a
      problem.” He’s obviously reading the wrong discussions of Linux
      security, because ActiveX has almost nothing to do with virus issues.
      ActiveX is the source of problems like the proliferation over the Web of Windows
      trojans, adware, and spyware. It also provides some convenient
      vulnerabilities for privilege escalation in the process of “rooting” a
      Windows box. Because of the lack of ActiveX, Linux is less vulnerable
      against such forms of system security compromise, but actual viral
      activity is quite separate from that. ActiveX allows arbitrary remote
      code execution, while the problem with a virus is privileged local code
      execution. It would help immensely if Blankenhorn actually knew what he
      was talking about, I suppose. I guess his confusion might have been
      with the fact that ActiveX controls have been known to deliver a
      virus to your machine, but that’s just a matter of convenient
      communication, and not of actual vulnerability to the virus payload. Even if
      there was something like ActiveX delivering virus payloads to your
      Linux machine, once it got there, it would sit there, dormant. You’d
      still have to run the virus yourself (or have something configured to
      run it automatically for you), and it would still only run with
      privileges associated with the user account that initiated the process.

      He states that antivirus is a service (not software), that we can
      get there from here by creating a centralized standard. He fails to
      realize we’re already there, both as a service and as
      software. On one hand, we’ve got vulnerabilities getting plugged “once
      and for all” so that an approach to virus-writing that works today (and
      still won’t do much damage unless you pretty much set out to destroy
      your own system) won’t work tomorrow. That’s the nature of the beast on
      Linux systems: too bad for the script-kiddies and virus writers, life
      is tough. On the other hand, we’ve got excellent antivirus software,
      like ClamAV, which is really useful to protect Windows machines from
      virus code that might otherwise be passed on to them by a Linux machine
      that doesn’t even know (or care) that it’s there. Antivirus software
      (even free AV software with open source code, like ClamAV) exists and
      thrives on Linux systems, but it’s pretty much only for mail servers
      because it’s a waste of system resources to use it to “protect” a
      desktop Linux system that isn’t even vulnerable.

      It’s interesting that Blankenhorn has become so indoctrinated in the
      Windows end-user buzzword culture that he thinks all system security
      boils down to virus protection. If you really press him, he might also
      list adware and spyware, and may even use the word “trojan”, in a
      discussion of system security. If you asked him what these things were,
      though, he’d almost certainly tell you they’re types of virus. Of
      course, he’d be wrong.

      Maybe that’s why he thinks Linux needs a makeover, needs a software
      monoculture with a single point of failure, needs all its attack
      vectors collected in one, convenient place so the “bad guys” know where
      to point their malware. He talks about “unifying what is happening in
      all major distributions” as if that would in some way simplify the
      process of securing your system at home.

      Bad news for your “analysis”, Blankenhorn: You’re applying Windows
      solutions to a Linux non-problem. It is not only not necessary, but
      also potentially harmful. I guess you won’t be happy until you can buy
      a shrink-wrapped yellow box with the word “Norton” on the front of it
      that gives you a false sense of security. I, personally, opt for the
      real thing, and it doesn’t come in a logo-printed box.


      In other news, a reader commented in an earlier post to this blog
      about a statement I made regarding MS SQL Server. The comment was this:
      “you lost any and all credibility you gained with me when you write
      ‘More bad news: nobody running a database in a serious, high-load
      enterprise implementation would use MS SQL Server.'”

      I think, again, there’s a failure to understand here. The reader
      claims this statement is ridiculous, untrue, and contraindicated by
      his/her own experience as a professional DBA and developer. Let’s
      examine the statement in discrete parts.

      serious: I’m talking about people who know what they’re doing and
      have normal, practical, real-world applications in mind. This rules out
      those who simply buy the MS party line and try to do everything with
      Microsoft “solutions” without even examining alternatives. This also
      rules out people implementing various DBMSes only to study their
      performance, those who just pick up whatever they have on hand, those
      using one particular DBMS just because that’s what they’re being told
      to use by their college professors, those who pick their DBMS based on
      marketing concerns rather than actual implementation suitability, and
      so on.

      high-load: Keep in mind that before my current job I was the
      database technician for the Wikimedia Foundation. The Wikimedia
      Foundation’s servers support a top-100 website, measured in terms of
      traffic. This traffic consists of absolutely absurd amounts of database
      access for both read and write operations. Sure, there’s a lot of
      caching going on to increase end-user performance that would otherwise
      suffer from the simple fact that a relational database is only so fast,
      but this is only effective on the read side of things, and it doesn’t
      eliminate the load on the database servers: on the write side (pun
      intended), there is pretty much nothing to be done about reducing
      server load. Because of the nature of the websites existing under the
      umbrella of the Wikimedia Foundation (such as Wikipedia, Wikinews, and
      so on), these servers also tend to suffer far higher database write
      loads than similarly popular websites. Now, your DBMS implementation
      doesn’t have to live up to Wikimedia standards of “high load” to
      qualify as a high-load implementation for purposes of my statement, but
      a really busy accounting department using MS SQL Server for payroll
      sure as heck doesn’t count. I’m talking about “high load” in a
      generally absolute sense, not in a sense measured by the relative
      capabilities of the DBMS. If it’s a “high load” for MS SQL Server, that
      doesn’t mean it’s similarly a “high load” for, say, an Oracle database.

      enterprise: I don’t care what stickers and slogans Microsoft thinks
      should be attached to the MS SQL Server shrinkwrap, if it’s not serving
      at least a thousand different access vectors, it’s not “enterprise”
      level by my standards.

      If, after all that, you still know people trying to use MS SQL
      Server to suit these needs, I recommend you educate them about their
      mistakes before they lose their jobs for failing to keep things in
      steady working order.

      NOTE: I’ve made a slight edit or two in response to comments this post has received, and it may gain some edits, because some people simply don’t have the information needed to understand that their criticisms have already been addressed by the open source software development community at large.

      • #3071515

        failing to understand how it works

        by jmgarvin ·

        In reply to failing to understand how it works

        Spot on!  My biggest beef with the whole article is that he doesn’t have a clue about Linux.  Has he not heard of:

        A) SELinux – For those not in the know SELinux (to a strong extent) is kernel based security.  It focuses on role-based access control and multi-level security starting at the kernel level.  This is HUGE for security.  Sure, it isn’t perfect, but it is pretty young and creates a more robust security vision than the MS Patch and Pray ideal.

        B) Stateless Linux – This is just cool.  Imagine you have a master server that all clients replicate off of.  The idea of stateless Linux is Active Directory on steroids!  So, if the master is ever updated or changed, all the clients synch with that change and update AUTOMATICALLY.

        C)  Xen – Allows you to virtualize your environment so you can test and “trap” what will happen.  This ideal is starting to be applied to security as well.

        Oh, let me plug ClamAV….

        *sigh* Why is there so much FUD?

         


      • #3071337

        failing to understand how it works

        by stubby ·

        In reply to failing to understand how it works

        I haven’t read the ZDNet article but just a point or so from yours ….. “Set up a cron job” would be akin to writing a batch file in windows and guess what – yer average Joe User can’t and doesn’t much care about doing so either. If they can’t tick a few boxes or click an ‘update now’ then they are not interested.

        “privilege separation” – again a good thing in the *nix world but not what most average Joe User want. They want to log in (if they do that at all) and be able to do everything without thinking (yes I know ….) and that includes installing software without having to change users.

        Your blog is well written and stayed away from the usual *nix v Windows fervour, but I think it fails to understand what the majority of Windows users want, just as much as you claim the ZDNet article’s author fails to understand Linux.

        Just my 2p worth.

        Revive BeOS I say 🙂

      • #3057704

        failing to understand how it works

        by jdclyde ·

        In reply to failing to understand how it works

        An excellent read.  Very well thought out and explained.

        I think Stubby is right: his
        post highlights many of the problems that happen in the Windows
        world.  Giving the user what they WANT instead of what they NEED.

        They WANT a system that requires nothing on their part, and then
        complain to or at the techs or vendors when everything comes tumbling
        down.  Is Dell/Gateway/IBM (fill in the blank) to blame if Joe
        User picks up a virus that formats their hard drive and they lose all
        of the pictures of little Joey?  Is it Microsoft’s fault for making
        a product the way Joe User WANTS it?

        I recently had to reload a system for a friend of my mother’s.  Had
        she been doing “Windows Updates”?  No, because she thought that
        would update her to a newer version of Windows and she liked this
        version.  (sigh)

        I think the end user is getting exactly what they WANT, and ends up with what they deserve.

      • #3057613

        failing to understand how it works

        by rah3000 ·

        In reply to failing to understand how it works

        I have to agree with your response to Blankenhorn’s article, even though I work in a Windows shop. It is obvious to me his research on this subject was limited if not non-existent. I would suggest your comment “It seems like Blankenhorn is making the classic mistake of assuming that with which he is familiar is universally applicable.” is valid.

        Unbiased analysis requires a degree of critical thinking, which is why I think most follow the “logo” and tend to preach from that podium. Hopefully your response to Blankenhorn will convince some of the “believers” the world really isn’t flat.

      • #3060253

        failing to understand how it works

        by lastchip ·

        In reply to failing to understand how it works

        One has to ask, where is the editorial control?

        I too, have not seen this article, but have no reason to disbelieve
        apotheon’s review, which seems accurate, balanced and well articulated.

        If it really is that blatantly wrong, why was it published in the first
        place? Or was it simply a badly written piece that failed to get its
        point across accurately?

      • #3070591

        failing to understand how it works

        by stephen howard-sarin ·

        In reply to failing to understand how it works

        I slipped a note to Dana, pointing out Apotheon’s blog. Dana’s responded in his blog.

      • #3071097

        failing to understand how it works

        by jmgarvin ·

        In reply to failing to understand how it works

        I’ve posted to his blog as well.  I still think there is a disconnect, but what he is saying makes more sense now.

    • #3060179

      a new programming paradigm

      by apotheon ·

      In reply to bITs and blogs

      There are all these programming styles erupting out of mass-market
      corporate culture. Books are published and people are made into instant
      millionaires by promoting their own buzzwords. We had object oriented
      programming (OOP) for a long time: now we have Extreme Programming
      (XP), Data Oriented Programming (which actually makes some sense: “it’s
      about the data, stupid”), Design Patterns, Structured Programming,
      Imperative Programming, Rapid Release Programming, Component Oriented
      Programming, Reflective Programming, Post-Object Programming,
      Aspect-Oriented Programming (wtf?), Constraint Programming (wtf?!), and
      so on.

      Well, I’ve decided to get in on the action. I’ve invented a new
      programming paradigm of my own: Rapid Obsolescence Programming Extreme.

      I invented this new programming paradigm entirely by accident,
      actually, and I did it in about half an hour yesterday. I’ll explain.

      A coworker was dealing with a datafile too big to open with
      OpenOffice.org Calc. She tried opening it in emacs to do some
      search-and-replace voodoo on the thing to reduce the file size
      somewhat, but found that it was too big for the emacs edit buffer. She
      decided to try it in Vim, which doesn’t have that same limitation, and
      decided to replace all instances of two consecutive newlines with a
      single newline. Apparently, this file had a lot of instances of
      two consecutive newlines, so this might help. Unfortunately, as far as
      both she and I have been able to determine (and she hit me up for help
      because I’m the Vim guy ’round here), there doesn’t seem to be a way of
      inserting newlines by regex in Vim without getting wacky binary
      characters that are less than desirable. Fooey.

      So, of course, I wrote a Perl script (in Vim, hah). It just reads in
      the contents of a specified file, replaces all instances of two
      consecutive newlines with a single newline, and spits out the results
      to STDOUT (that’s the CLI environment display, or “the screen”, for you
      Windows types). Use a redirect to point the output at a filename, and
      voila, you can either dump the results into a new file or overwrite the
      old file.
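
      For the curious, here’s a minimal sketch of roughly what that throwaway script did, reconstructed from the description above rather than copied from the original (the script name and variable names are illustrative):

          #!/usr/bin/perl
          # collapse-newlines.pl -- hypothetical reconstruction: slurp the file
          # named on the command line, replace every pair of consecutive
          # newlines with a single newline, and print the result to STDOUT.
          use strict;
          use warnings;

          my $file = shift or die "usage: collapse-newlines.pl FILENAME\n";
          open my $fh, '<', $file or die "cannot open $file: $!\n";
          my $text = do { local $/; <$fh> };    # slurp the whole file at once
          close $fh;

          $text =~ s/\n\n/\n/g;                 # two consecutive newlines become one
          print $text;

      Redirect the output to taste: perl collapse-newlines.pl huge.csv > smaller.csv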

      I looked at this one-time throwaway script and thought “I can make
      this better. I can make a useful search and replace utility out of
      this. I think I’ll do so.” Thus, I slapped in a “use Getopt::Std;” and
      monkeyed with the code until it took CLI arguments to allow it to
      replace anything matching one regular expression string with text
      specified by another regular expression string, then dump it to STDOUT
      (again to be redirected as needed). I finished, and it was beautiful. I
      even used a HERE doc to give it a help switch so that it’ll tell you
      how to use the utility, who wrote it, what the release license of the
      thing is, and what bugs it has.
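
      For illustration, a skeleton of that sort of Getopt::Std wrapper might look something like the following. This is only a sketch based on the description above: the option letters and the script name are made up, it doesn’t reproduce the HERE-doc help text, and it doesn’t attempt to work around the bug described in the next paragraph.

          #!/usr/bin/perl
          # substutil.pl -- hypothetical sketch of the generalized utility:
          #   -s  search pattern (treated as a regular expression)
          #   -r  replacement text (treated as plain text in this sketch)
          #   -f  file to read
          # Prints the transformed contents on STDOUT, to be redirected as needed.
          use strict;
          use warnings;
          use Getopt::Std;

          my %opt;
          getopts('hs:r:f:', \%opt);

          if ($opt{h} or not defined $opt{f} or not defined $opt{s}) {
              # the real script reportedly used a HERE doc here for a fuller
              # help message (usage, author, license, known bugs)
              print "usage: substutil.pl -s PATTERN -r REPLACEMENT -f FILE\n";
              exit;
          }

          open my $fh, '<', $opt{f} or die "cannot open $opt{f}: $!\n";
          my $text = do { local $/; <$fh> };
          close $fh;

          my $search  = $opt{s};
          my $replace = defined $opt{r} ? $opt{r} : '';
          $text =~ s/$search/$replace/g;    # the pattern is interpolated as a regex
          print $text;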

      Yes, it has bugs. One bug, to be exact. Apparently, for some utterly
      asinine reason, Getopt::Std (and, for that matter, Getopt::Long)
      sanitizes arguments that are inserted into regexes.

      So, after expanding the functionality of the script until it’s a
      bona-fide utility, it will now do everything I wanted except the one
      task for which it was originally designed: newline substitution. In the
      course of about half an hour, I went from prototype to obsolete. Now,
      you know the story of ROPE (Rapid Obsolescence Programming Extreme).
      Use enough of it, and you can hang yourself.

      Just think: when I’m a multimillionaire, collecting royalties on my seminal work ROPE Software Development, you’ll be able to say you knew me when.

      • #3060174

        a new programming paradigm

        by wayne m. ·

        In reply to a new programming paradigm

        Suggestion: Look Again at Extreme Programming – The process above describes exactly the situation that Extreme Programming was meant to address.  Test First Design – Define what you are trying to do, and write a test to ensure the code does that.  Do the simplest thing that could possibly work – Don’t expand or generalize your code until there is an explicit need.  Concentrate on doing what is asked for (what you wrote the test for) – no more, no less.

        Before ridiculing methodologies that you are not familiar with, take the time to understand them.  Most of them have some underlying rationale that deserves respect.

         

      • #3070787

        a new programming paradigm

        by davidbmoses ·

        In reply to a new programming paradigm

        I also suggest you have a look at sed and/or awk. No need to rewrite
        standard tools. Sed is ancient and should run faster than a Perl
        script. Some may say sed might have inspired Perl’s beginnings at least
        in search and replace.

      • #3070728

        a new programming paradigm

        by sbockelman ·

        In reply to a new programming paradigm

        …and someone was paying you to do this?

      • #3070626

        a new programming paradigm

        by apotheon ·

        In reply to a new programming paradigm

        Okay, I’m breaking my own rule. I’m responding to comments in this blog.

        1. If you think I was singling out Extreme Programming for ridicule, or saying that there was nothing worthwhile in the general set of principles that make up Extreme Programming, you weren’t paying much attention and probably take yourself far too seriously. Of course, it’s pretty obvious you take yourself, and Extreme Programming, too seriously if you would apply the concept of “Don’t expand or generalize your code until there is an explicit need” to this example.

        2. I know Perl. I don’t know sed and awk. It took me all of three minutes to cobble together the initial Perl script for the newline search and replace, and it would have taken me half an hour to sort out how to do the same thing using a language and an editor with which I’m not familiar. It would probably be to my advantage to learn sed and awk, and in fact I’ve been thinking about setting aside a weekend in the near future to get as well-versed with sed as possible, but for now, I don’t have the time or need to learn either.

        3. No, nobody was paying me to do “this”, if by “this” you mean turning my three-minute script into a half-hour project. I get paid a salary for eight hours a day, am in the office for closer to nine hours every day, did some of the expanding of the original script while I was watching paint dry (think: letting backup logs scroll past and a server OS install go through its automated parts) and did some of it when I wasn’t even in the office. Blind assumptions, such as those suggested by the sardonic comment “and someone was paying you to do this?”, often just make you look dumb. I recommend against making such assumptions in the future.

        All of the above aside, Perl is fun, and all else being equal, if I can use it to accomplish a task at work, I do. Note the phrase “all else being equal”, folks. Obviously, if there’s a much better way of doing something, I’ll do that instead.

        Capisce?

      • #3070556

        a new programming paradigm

        by joeaaa22 ·

        In reply to a new programming paradigm

        Three things:

        • To Wayne M. – Get a grip. XP is like every other
          programming paradigm out there; no better, no worse. It works
          within its specifically designed scope. And since this is a
          humorous post, just read the thing and laugh.
        • To davidmoses – Right on! Yes, perl can do
          anything but there are already existing tools that do their one thing
          and do it very well. I don’t know how I’d live without awk.
        • To apotheon – Great post. I have been doing this
          same kind of programming for years. I just never thought to put a
          name to it. Props for coming up with the acronym. Can’t
          wait till the book comes out. 😉

        All spelling errors © Joe – 2005
        All type-o’s © this cheap-ass keyboard – 2003, 2004, 2005
        Some rights reserved, maybe.

      • #3068630

        a new programming paradigm

        by cheval ·

        In reply to a new programming paradigm

        This fits perfectly with my research “How to do nothing with anything –
        Why computer software programming is the hardest career path
        possible.” Not even medical science or quantum physics has the
        diversity and possibilities of software programming. Software
        development is the ultimate catch-22: if you specialise, you get left
        behind; if you broaden out, you get lost in learning.

        – Cheval

      • #3068628

        a new programming paradigm

        by christineeve ·

        In reply to a new programming paradigm

        Genius!

      • #3071013

        a new programming paradigm

        by roho ·

        In reply to a new programming paradigm

        Forget the multimillionaire. It seems like you are wasting a lot of
        time making software that will never be used. Not even once, as you had
        solved the problem before things started getting out of control.
        On top of that, your lengthy post just adds to the wasted time.
        And I took the time to read and comment, oh my gods!

      • #3070904

        a new programming paradigm

        by innocent_bystander ·

        In reply to a new programming paradigm

        It wasn’t funny, it wasn’t informative, it wasn’t interesting. It was self-absorbed and sarcastic, which is why I don’t bother to read most of your posts. But for some unknown reason they provided a link to it.

      • #3060807

        a new programming paradigm

        by richard p ·

        In reply to a new programming paradigm

        Humorous – yes! Worth the read – yes (if you know anything about software coding). There’s even a social comment buried in there.

        Comments – everyone is entitled to their own opinion, which is probably why there is more than one way to do anything. Please, for those who have a taller horse, come down from there and stop maligning others who don’t share your opinions.

        Two quotes:

        1. Experimentation is an art of learning. (me)
        2. Friends come and go, enemies accumulate. (someone else)

        OH YES – If people didn’t sit around doing stuff in their free time, we wouldn’t have any free software at all – no, not one.

      • #3060801

        a new programming paradigm

        by innocent_bystander ·

        In reply to a new programming paradigm

        I agree on Perl, but…

        Several years ago, I suddenly had to support (meaning modify as soon as possible) a bunch of CGI scripts. I knew nothing of Perl, so I went out and bought a bunch of the highest-rated references and tutorials.

        I enjoyed learning the language and modifying the scripts in spite of the awkwardness of posting, setting permissions, and testing without bringing down the site. But when I decided I wanted to know more, I went to a Perl newsgroup moderated by a couple of the big names who had authored the references I had purchased previously. Before I posted anything, there was an introductory warning about not asking trivial questions, then a laundry list of sites to check first, then an admonishment to check every other available source before bothering the important people on this site!!!  The threat was that you would be flooded with flaming emails if you put a step wrong. I gave up wanting to learn Perl if these were the people I would be “networking” with.

        Since then, I’ve been told by others, rightly or wrongly, that this is typical of Unix geeks. (I’m a Microsoft geek – but I don’t want to get into the tech wars – MS rescued me from COBOL and put multi-tasking on the desktop for me, so I’m not going to bite the hand that feeds me.)

        Anyhoo, if you want to convert people to what I’ve come to call the “dark side”, try encouraging a little less hostility towards newcomers.

      • #3118367

        a new programming paradigm

        by illilli ·

        In reply to a new programming paradigm

        How in the world do you go from posting a humorous programming parody to being accused of converting people to the dark side? I must laugh. Also, what is with everyone adding a “yeah, but…” statement to everything they read? How about adding some useful comments like I am about to do?

        Useful comment #1 – I invented ROPE WAY BEFORE YOU! I was reinventing the ROPE, in fact, when you were the same age as me and writing complex algorithms in a REAL programming language while I played around with SED/AWK.

        Useful comment #2 – SED/AWK are precursors to Perl.  If you know Perl, you pretty much know SED/AWK minus a few syntax differences.

        Useful comment #3 – Why would you use SED/AWK when you can use Perl? Maybe there is a good reason, so please create your own blog with it so I can find it and read it.

        Useful comment #4 – Avoid eating large quantity of prunes the night before you are supposed to get married.

        There, I hope I have added value to this very good blog. 

    • #3068821

      A-Fisking We Will Go

      by apotheon ·

      In reply to bITs and blogs

      Amazing. People read this thing. People even tell others about it. In particular, my earlier ramble entitled “failing to understand how it works” was pointed out to Dana Blankenhorn. This is significant because Dana Blankenhorn, a ZDNet blogger with (I’m sure) rather more readers than this ITLOG has, wrote a piece on the subject of consumer-level Linux system security needs, and the first half of “failing to understand how it works” was all about his Linux security analysis.

      Mr. Blankenhorn then proceeded to reference it in a new blog post, entitled “I get a right good fisking”. For those who don’t know, fisking is the act of critiquing an argument or presentation in excruciating detail to challenge its conclusions. In retrospect, I guess that is indeed what I did. I seem to do a lot of that. Frankly, fiskings are easy to write.

      Maybe I should have called this fisklog.

      In any case, I have a couple things to say in response to Mr. Blankenhorn’s response to my critique:

      He states that he wasn’t writing at a technical level: he was writing from a business perspective. This sort of attitude toward technology analysis is part of the problem we have with security in the IT industry, I’m afraid. As long as we write off vast gaps in our knowledge, wild misconceptions, and blatant factual errors as a “technical” matter that doesn’t apply to our “business” analysis, we will continue to have all the problems we currently have, and will in fact continue to be quite productive as creators of yet more problems. While I tend to agree with Blankenhorn’s more recent assertion that making the end user feel protected, and feel like it’s not hard work to stay that way, is a necessary part of attracting a strong user base, that’s not really what he said in the earlier blog post, even if it’s what he meant. What he said was that Linux needed antivirus. As long as people keep prescribing the wrong medicine for what ails our end users, the consumer base will continue to suffer all the indignities it currently suffers, and yet more will be heaped upon them. If we’re going to recommend a business-oriented solution to a technical problem, we need to ensure that the solution we recommend actually addresses that problem. First and foremost, that means we have to be able to accurately identify the problem, too.

      He brings up the popular debate about the comparative security of Windows versus Linux systems. He’s right that there isn’t an easy answer to whether Windows is inherently less secure than Linux, but not for the reasons he seems to think. There’s plenty of data available for determining the technical security capabilities inherent in the OS architecture of both Linux and Windows. What raises the problem of uncertainty is not whether Linux systems will be exploited more often than Windows systems if and when Linux dominates the market the way Windows does now. For one thing, that’s only one variable, and a lot of others will have changed in the meantime. What creates the ambiguity is our definitions of “secure”, and what operating conditions we want to analyze for comparative system security. Under almost all circumstances, by almost any standards with any real meaning, Linux is a clear winner: it doesn’t involve the ease (nay, the invitation) of privilege escalation that Windows does, it has industrial-strength, enterprise-class integrated firewall capability, it’s almost difficult to have unneeded ports open on modern distributions’ default installs, privilege separation is essentially complete, there’s no common mechanism for automatic execution of untrusted code, mobile code vulnerabilities get fixed rather than covered up one exploit at a time by virus definitions, and so on. The list of technical security advantages seems almost endless. There are definitely cases where security is effectively equal between the two, with perhaps a slight advantage one way or the other, however. One of these cases involves a locked room with no network connection. Another involves unplugging the computers and dropping them into the ocean. What Mr. Blankenhorn means about there not being enough data for a decent analysis of comparative security is the same thing the Microsoft marketing juggernaut means: because Grandma hasn’t been using Linux for the last decade, the devil you know is less frightening than the angel you don’t.

      It’s true that Linux is not the “perfect” security model. There are problems. If you want something closer to “perfect”, what you need to do is start using something like OpenBSD instead. The OpenBSD developers take pride in being called “practical paranoids”, and with good reason: there has only ever, in the entire existence of OpenBSD, been one release of the OS that included a remote root exploit vulnerability in a default install. One. Period. And nobody has ever, as far as I’m aware, actually exploited it. OpenBSD has to sacrifice much to maintain that level of security, however, and some of that is outside the realm of what the common end user is willing to put up with. It does make an interesting point, though: OpenBSD has no need of antivirus software. Why should Linux, or even Windows, need it? The reason Windows needs it is simple, unfortunately: in terms of security, it’s Swiss cheese.

      Direct quote: “We could do with a cheaper, better solution than ‘Win-doze.’ And if we jump, we don’t want to be hosed by a bunch of script kiddies jumping on the Linux bandwagon. Prove we won’t be.” My response to that is a simple one. If the script kiddies aren’t hosing the Linux users now, now’s the time to switch. I can point out technical reasons all day for why Linux is a more secure platform for general purpose computing than Windows, but ultimately it’s mostly over the head of the average end user, and anyone who is looking for excuses to disbelieve can surely find them in the metric tons of FUD strewn about the information technology marketing landscape on a daily basis. All you really need to know, for the time being, comes down to two things. One, if you switch, you’ll at least be a lot safer for now. All else being equal, temporary safety is better than none. Two, the common tasks of end user computing can all be accomplished on a Linux system, just as easily. You just have to be willing to try something new.

      And please, for the love of all that’s holy, all you end users out there stop using Outlook Express and Internet Explorer, even if you insist on continuing to use Windows. I’m tired of having to configure mail servers to discard the mountains of spam your computers broadcast to the Internet at large when compromised by remote code execution exploits that turn your computers into spam zombies. Seriously, it’s getting old.

      • #3071081

        A-Fisking We Will Go

        by jmgarvin ·

        In reply to A-Fisking We Will Go

        Gaming will be the thing that makes people jump ship.  I know,
        I’ve heard it all, but that will be what makes people leave Windows and
        go to Linux.

        The very second that I can yum install half-life-2 is the very second that people will totally jump ship and move to Linux.

        I wish more game companies would make Linux native clients for their
        games or would at least LOOK at creating Linux native stuff.  If
        Bioware could do it with Neverwinter Nights and Maddoc Software/Splash
        Damage/Activision can publish Wolfenstein: Enemy Territory (for free, no
        less), why can’t other companies release native clients as well? 
        Why can’t Blizzard create a World of Warcraft native Linux
        client?  Why can’t Microsoft create a …oh wait…THAT’S
        WHY!!  😉

        My point?  Red Hat, Debian, Slack, Gentoo, SuSe, et al need to
        team up and get on the ball getting native clients!  Someone has
        to take the helm and it has to be an established company! 
        Novell/SuSe, I’m looking at you!

    • #3115791

      Happy Halloween: I hope you have a job.

      by apotheon ·

      In reply to bITs and blogs


      It’s Halloween. Let’s talk about something scary. Let’s talk about the future of our jobs and our industry.

      I’m going to assume I’m speaking mostly to information technology professionals, here. That’s what this is about: the IT industry and the people who’ve made a career in it. Things have been a big ol’ bucket of upheaval for the last decade or so, and it doesn’t look like things are smoothing out any time soon.

      There were some sudden growth spurts in the IT industry for a while, there. We saw the “dotcom” boom, and the associated incredible growth in the IT job market in the ’90s. Half of that was hype, which of course led to the subsequent crash when the hype evaporated. Such is life.

      Things are finally getting to a point where some of us have the ability to look at the future and say “Yes, I’ll still have a job next year.” To some extent, then, we’re recovering from that crash. The number of jobs is dwindling, though, at least domestically in “first world” nations like the US. Jobs are being outsourced and offshored, IT departments are still being whittled down a bit by accountants who consider the IT department nothing but overhead, and corporate consolidation is drastically reducing needed workforce once businesses get successful enough to be able to maintain market presence without innovating and advancing. Things look grim, but at least we’re not in freefall.

      I’m a US citizen, and I live in the continental United States. I’m a career IT professional, and what happens to the tech industries has a direct effect on my livelihood. The future of the IT industry worries me.

      Companies like IBM, Dell, SBC, and other IT industry giants have been cutting their own throats. A vicious cycle has been started, and it’s just getting worse from a corporate infrastructure standpoint. Someone figured out that short-term costs could be cut down by farming out tech support to overseas sweatshops, paying pennies on the dollar for the same number of people doing what managers assumed any monkey could do. This became a trend that spread to more than just tech support, as everyone wanted to get in on the short-term savings. Pretty soon, tech support as a domestic job field was all but wiped out. Leaving aside for the moment that impenetrable accents tend to alienate customers, this had a rather chilling effect on the IT industry: tech support jobs were the de facto “entry level” for IT professionals. Suddenly, there’s no way to get into the career field for many people.

      In the drive to cut short-term costs, thus increasing on-paper quarterly gains for the board of directors, other long-term inadvisable steps were taken. IT workers and managers have been worked harder and harder, with large swaths of the workforce being laid off, and thus had less and less to do with the hiring process when someone new had to be hired: as a result, more and more of the hiring process fell to Human Resources, which guaranteed that people would get hired based on resume-writing skills rather than technical skills. Then, of course, people complain that they’re not getting skilled workers, as they offer $25,000 salaries, nearly nonexistent benefits, and atrocious working conditions, and do a piss-poor job of screening candidates. Add to this the fact that there’s almost no such thing as an entry-level IT job, and you’ve got a recipe for — well, for exactly what we’ve got: a declining domestic industry.

      Now consider this: with the number of IT workers actually making a decent living in constant decline, a significant percentage of the technology customer base is disappearing. The technology early-adopters, the people who buy two computers a year when they’re well-paid, the people who talk their non-techie friends into buying something new and great, are almost all unemployed. The people who are enthusiastic about IT, who spend their money on it and are so invested in it that they can’t help but be good at their jobs, are the people most suffering. We’ve got an industry populated more by social climbers and gold-diggers than professionals, people who polish their resume-writing skills rather than reading trade rags and technical manuals for fun, who don’t need the latest and greatest computer at home because when five o’clock rolls around they punch out and go to a football game instead of playing with technology.

      Thus, the industry declines, and costs need to be cut again. Thus, more of us are laid off, and sales decline. Thus, the industry declines, and costs need to be cut again. Lather, rinse, repeat.

      There might be an upside to this, though. Don’t hang up your crimpers and turn in your laptops for a career in the lucrative field of sanitation engineering yet. Believe it or not, open source software might save your job.

      Tech support and proprietary software development will always be easily offshored. On-site support can’t be offshored, however, and open source software development doesn’t see any cost benefit from being offshored, because the software itself doesn’t cost anything. On the other hand, open source software development creates opportunity for more on-site development and support.

      Here’s how it works: When you’re using closed-source software, you can’t look at the source code. You can’t modify it. You buy some shrink-wrapped solution, and you use it as-is. There’s no such thing as customization for the purposes of your business. What you see is what you get, and you get to see precious little. You could always have a development team in your company to design things from scratch so that you can create exactly what you need, and modify as needed, but then you’re in the software business and, frankly, reinventing the wheel is extremely expensive in terms of both time and money. It would be so much better to just be able to improve on a wheel that already exists.

      That’s one place where open source software comes in. Not only can you get your software for free, but you can modify it to suit your purposes as well. You can write your own interfaces, you can add functionality, and so on, and you don’t need to create the entire application framework from scratch to do it. Better yet, you can get some of the foremost experts on a given software type working for you quite easily: hire from the pool of developers for the open source software you want to use. This turns open source software development into entry-level work for the IT industry, it creates an efficiency benefit for companies using open source software by allowing them the ability to cheaply tailor software to their needs, it provides a simple way to be sure you’re getting demonstrated talent for your developers, and it cuts costs by allowing companies to save money they’d otherwise be spending on shrink-wrapped software that was only half of what they actually wanted.

      Not all IT workers are developers, of course. I do the occasional bit of development myself, but that’s never really been what sold me to clients when I was consulting, and it’s not what got me my current job as an IT resource manager and network administrator. There’s hope for us (mostly-)non-developers, as well, though. As more developers get hired to make business processes more efficient, more computers are being used. More network administrators, more technicians, and more on-site supporters will be needed. As tasks like accounting, business process tracking, and communication become easier and more streamlined, IT professionals will become more in demand and may even end up replacing accountants, business administrators, and switchboard operators. For software-specific support tasks that don’t require permanent staff, domestic consulting will become more and more important: outsourcing doesn’t have to be overseas, after all. Look at the increase in the number of contract-basis consultancies and IT service providers domestically in the United States, and you’ll see it attached at the hip to the increase in open source software uptake in the country. Look at IBM: they’ve become more and more a total-solutions provider and supporter, selling support contracts along with open source integrated network solutions as package deals. Look at Google: its entire business, growing exponentially, is built on a basis of open source software, including huge Linux server farms, with massive in-house development. Google doesn’t even sell the software it develops, it just uses it to provide services, and it’s hiring people like crazy.

      If you want to remain relevant, you’re going to have to learn some new skills. That’s a given. The upside is that learning those new skills won’t be a waste of time, if you choose your skills well.

      Now, let’s have another look at the downside. There are huge corporations with lots of financial and political clout that have a vested interest in opposing this revolution in the industry. Yes, Microsoft is the canonical example, but Microsoft isn’t alone. These corporations are not interested in a world where their proprietary, closed source software vendor business model is becoming increasingly irrelevant. They want to put a stop to it, and the only way to do so in the long run is through legislation and anti-competitive business practices. Rather than innovate, they’ll stifle innovation. Rather than compete on the merits of what they produce, they’ll get legislators and judges to outlaw and punish anything with more merit.

      I don’t think they can win in the long run. I think it’s a losing game. In the meantime, however, things might get worse before they get better. Think about the chilling effects of legislation like the Digital Millennium Copyright Act, which saw researchers dragged into court for doing analysis of encryption algorithms. Think about the way the Trusted Computing initiative incorporates as much licensing enforcement as security enhancement, if not more. These things are happening, and they’ll continue to happen as long as corporations with a vested interest in hampering the open source development community can get away with making them happen.

      Yeah, it’s kind of scary, but the more of us in IT there are who are aware of the consequences of that sort of behavior, and who work to promote a world in which our skills are needed and relevant rather than letting ourselves be herded into tiny little pens to await the slaughter, the less damage such unscrupulous practices can do.

      It’s something to think about, anyway.

      • #3116598

        Happy Halloween: I hope you have a job.

        by mfurman ·

        In reply to Happy Halloween: I hope you have a job.

        I agree with most of your article.  I think that cross training
        yourself will help you keep your house.  People outside IT look at
        IT as speaking another language.  Sure they can’t do it, but if
        “they had the schooling” they could do it along with their current
        job.  Most don’t realize that even doctors can muddle through
        their work without any new training for years at a time.  If an IT
        pro sits on their hands for even 6 months, they will be behind. 
        So think about getting some sales skills.  Try making an excuse to
        go to that client with the sales manager.  Hit your company up for
        logistics training and other “catchy” terms like Kaizen, 6 Sigma, and
        Lean Office. 

        It’s working for me; I hope others try it and it works for them as well.

      • #3116535

        Happy Halloween: I hope you have a job.

        by alangeek ·

        In reply to Happy Halloween: I hope you have a job.

        I believe the previous poster is mistaken about doctors not needing training; most, if not all, medical personnel are required to get training every year to keep their skills up-to-date. I believe this is actually required by law.

      • #3116481

        Happy Halloween: I hope you have a job.

        by icubub ·

        In reply to Happy Halloween: I hope you have a job.

        When you add in the declining enrollment rates for Computer Science,
        Engineering and Technical degrees at our US universities, it does look
        bleak. However, if you throw in the fact that many of the baby boomer
        era are nearing retirement age, and that many in Federal or government
        positions are looking to retire in the next 5-10 years, there will be
        great opportunity for IT people. You just have to be patient, and
        develop those skills. Try learning more about security, compliance
        issues (SOX, HIPAA, etc.) that all companies will eventually have to
        face, and those areas where you are weak and need to beef up. Look for
        areas of opportunity within your company where you can improve upon
        business processes, or simplify things for your users.

        Most importantly, start thinking like a businessman. Treat the company
        like it was your own! Learn more about operations and how things work.
        Make friends with the accountants down the hall, or talk to the Sales
        guys about how to improve their lives. IT is no longer about
        implementing the latest and greatest technology, but making technology
        work for the end users and the company. Take a class on making
        presentations, or join some professional organizations to help build
        your presentation/speaking skills.

        Just my $0.02 worth as well! 

      • #3136523

        Happy Halloween: I hope you have a job.

        by lfeldman9 ·

        In reply to Happy Halloween: I hope you have a job.

        Dear Apotheon,

        You say:

        “the number of IT workers … (is) in constant decline”.

        But the U.S. Bureau of Labor Statistics in their Occupational Outlook Handbook, 2004-05 Edition (at URL: http://www.bls.gov/oco/home.htm ) says that:

        “Employment of programmers is expected to grow about as fast as the average for all occupations through 2012.”

        and

        “Computer software engineers are projected to be one of the fastest
        growing occupations from 2002 to 2012. … Despite the recent downturn
        in information
        technology, employment of computer software engineers is expected to increase much faster than the average for all occupations…”

        and

        “Employment of computer support specialist is expected to increase faster than the average for all occupations through 2012…”

        and

        “Computer systems analysts, database administrators, and computer
        scientists are expected to be among the fastest growing occupations
        through 2012. Employment of these computer specialists is expected to grow much faster than the average for all occupations…”

        According to their chart
        (see the chart below or at URL: http://www.bls.gov/oco/oco20016.htm), the
        BLS is predicting that employment of computer-related
        professionals will increase by 10 to 36 percent or more (depending on the
        specific occupation) over the interval of 2002 to 2012. In no way can
        that be considered a “constant decline”.

        While I sympathize with your point of view and know that many IT
        professionals have suffered hard times recently (myself included),
        perhaps we should not descend into a state of complete gloom and
        doom.

        The limited, anecdotal evidence of IT unemployment that any of
        us have as individuals is not a sound basis for decision making.
        And the BLS data seems to refute your assertion.

        Do you think that their data are wrong or that they are lying to us?
        The BLS is in the business of collecting employment data and making
        employment trend predictions. I trust their data. Don’t you?

        Regards,
        lfeldman

        Changing employment between 2002 and 2012

        If the statement reads:           Employment is projected to:
        Grow much faster than average     increase 36 percent or more
        Grow faster than average          increase 21 to 35 percent
        Grow about as fast as average     increase 10 to 20 percent
        Grow more slowly than average     increase 3 to 9 percent
        Little or no growth               increase 0 to 2 percent
        Decline                           decrease 1 percent or more
      • #3136509

        Happy Halloween: I hope you have a job.

        by apotheon ·

        In reply to Happy Halloween: I hope you have a job.

        I have a couple of notes for the previous commenter:

        First, notice that all of those quoted statistics say “expected” and
        the like. They’re prognostications, not hard and fast facts. They may
        or may not come to pass as predicted and, even if they do, they may not
        particularly apply to the domestic industry in the manner presented.
        Lies, damned lies, and statistics: aside from the potential for
        misapplying statistical analyses (projected need for programmers
        doesn’t necessarily mean increased domestic employment of them),
        there’s always the likelihood of a whole lot of spin. Government
        industry prophets work to get people elected to second terms, and to
        get extra funding for their bureaus based on a perception of increased
        relevance, while private industry prophets work to get people to pay
        them for their prognostications. Even when they’re telling the truth,
        they may not be telling all of it. Completely aside from all that,
        though, there’s my second point.

        Second, notice that I was talking about the present and the recent
        past when I said “decline”, using words like “is”, while the statistics
        cited refer to the future, using words like “will”. The two don’t even
        conflict. I don’t see how that chart in any way contradicts my
        reference to employment in the IT industry being in decline.

        I’ll throw in a bonus point for you: Third, you might take note of
        the fact that those projected statistics refer to specific narrow job
        types, not to the IT industry as a whole (not all of us are
        DBAs, for instance).

      • #3136408

        Happy Halloween: I hope you have a job.

        by bartlmay ·

        In reply to Happy Halloween: I hope you have a job.

        I’m confused by something.  Whenever I hear complaints about the IT industry I hear the decline of the programmer.  Since when is the IT industry only about the developer?  In the 17 years that I have been directly involved with computer related technology I have seen three different areas.  Developer, Break/Fix technician and Network Administrator. 

        When I graduated High School I was on my way to becoming a programmer.  The Navy got ahold of me and my greed set in.  That is another story, and I departed the technology industry.  A few years after my discharge I got back in through sales and progressed from there.  To make a long story short, I now have my own business as an independent Information Technology Consultant. What I mean by that is that I suggest what to buy, install what was bought and repair what needs repairing.  I do this for small businesses and residential customers.  Just because I do not create or manipulate software does not mean that I am no longer in the IT industry.

      • #3136392

        Happy Halloween: I hope you have a job.

        by tkirkpat ·

        In reply to Happy Halloween: I hope you have a job.

        The part of IT I’m most disillusioned with is the
        management.  I used to manage an IT
        department of 15 people.  Small, by a lot
        of accounts, but we struggled to support a fairly large infrastructure, mixture
        of platforms, etc. that not only gave a lot of work-related satisfaction,
        but was kind of fun at the same time.  It was
        a great team effort.

        As the quantity of PCs, Macs and the infrastructure grew,
        corporate began downsizing IT.  They
        added IT Directors to head departments who had little if any hands-on IT
        experience.  Their main focus was cutting
        costs.  I figured this would come through
        advancements in the industry, but it just turned into a hatchet-wielding
        nightmare, beginning with basic support jobs and ending at my original manager
        position being turned into a supervisor/analyst position, then into no position at
        all.  The IT department went from a place
        with great teamwork, support and customer service to one of sheer drudgery,
        worry and ignorance.  It was also pared
        down from the original 15 to five – maybe two of which have any talent at all, or
        are at least “seasoned” enough to continue aiding in the work.

        That’s what you’ve got left to work with.  Employers looking for immediate expense cuts
        running the risk of a long and more expensive future with antiquated talent and
        equipment that will drive costs even higher.
        Prospective employees just out of college with no work experience who
        expect $50K – $60K per year to start, or ones that have been around the block
        and weeded out of organizations because they didn’t fit in or just couldn’t do
        the work.

        If I sound bitter, it’s probably because I am.  I’m one of those IT “professionals” who
        struggled to stay with their company – the same company, mind you – for about 17
        years.  They cut my resources so much
        that I had to leave or go insane – but come to think of it, I would have fit in
        perfectly once the insanity hit!

      • #3136301

        Happy Halloween: I hope you have a job.

        by bheite ·

        In reply to Happy Halloween: I hope you have a job.

        I agree with the general thread here. However, it is not just IT in this position. There is a trend in all business for the business itself to create these magical positions that have no real added benefit to the company. I work for a major (some say “the”) manufacturer of microprocessors, and we have probably a 50% overload in management and engineering people. The “boots on the ground” numbers seem to be the real shrinking factor.

        As far as statistics go, remember, these are the same people who say inflation is at 3%, while I have calculated this year’s rate (mainly energy, interest rates and costs passed on to me by others because of these two) at closer to 25%. So their accuracy is as good as a dart board.

        I do a lot of freelance IT work for people in a one-on-one way, and make a little at it, but would rather take that than work to support the bloated management structure American companies seem to think they deserve and need. Look at a lot of your daily interactions and you will see almost all of the symptoms illustrated in many areas, from automobile repair to tech support for devices you have purchased. The quality and effort to take care of the customer is the first casualty of the “herd” management hired today, and the result is the sort of brilliant thinking that leads to the offshoring of those pesky “support” people. I invite you to go look at Dell’s wonderful reputation as seen in the hundreds, if not thousands, of posts on the various web sites devoted to just “Dell Hell”. A vast majority of that comes from their offshoring. Dell is the poster child for poor management and bad decisions, and is illustrative of the comment about how cutting costs is akin to cutting their own throats. IBM bailed out specifically because they could not fathom quality and customer support as a part of any product they make. Dell just hasn’t imploded enough yet to get there.

        Good overall points about a subject no “herd manager” wants to discuss, or can even envision.

      • #3136244

        Happy Halloween: I hope you have a job.

        by ali40961 ·

        In reply to Happy Halloween: I hope you have a job.

        LFeldman,
        Do you have any idea where the BLS gets those figures? They get the data
        they base those numbers on from businesses. As the person who reported
        those numbers to the BLS for the company I worked for, I can tell you
        that there is a lot of room for error. First of all, not all companies
        that are required to report do so. Secondly, the company I worked for
        did not spend a whole lot of time making sure that the report I used to
        generate the numbers worked as it should. I pointed out many times
        where errors were occurring. The general consensus was, oh well, who
        cares. When I mentioned this to my contact at the BLS, I was told, just
        give me what you can.

        Reporting to the BLS is required, BUT there are NO ramifications if
        businesses DO NOT report. So many companies do not waste their time.

        As you can see, the numbers being given are a “best guess” and probably have no basis in reality.
        Sincerely,
        An unemployed IT professional

      • #3136690

        Happy Halloween: I hope you have a job.

        by campbellcanuck ·

        In reply to Happy Halloween: I hope you have a job.

        A lot of the discussion about the state of the IT industry today seems to ignore the past and how the IT industry in general conducted itself over the past 10 years.  In large measure, the industry itself is responsible for the broader business community pushing IT down the list of priorities and not giving it much respect (which is the general root cause of outsourcing and cost-cutting).  It is responsible because of the greed that pervaded the industry during the Y2K scare and the dot-com boom.  Businesses were repeatedly told that product X was going to save them billions of dollars per year.  OR that if they didn’t do Y, they were going to be obsolete and the business would die.  OR you have to buy Y.  Thanks for the cheque.  Oh, btw, that also means that you have to buy A, B & C (which are more expensive than Y is).

        My point is that after an era of broken promises, little ROI and being generally misled about the costs, benefits and implications of IT technology, how many people are still going to trust this industry?  Sure, the industry has changed and is addressing the issues.  But it will take time for the mistrust to dissipate.  When it does, and when the business world has a better perception of IT, the industry will become a valued part of any large business, almost all medium businesses and most small businesses.  Until then, suck it up, people, and work to make things better.

        Respect is not given, it’s earned.

      • #3137550

        Happy Halloween: I hope you have a job.

        by leslieevan ·

        In reply to Happy Halloween: I hope you have a job.

        At the risk of sounding Pollyanna-ish, I’d like to point out that if
        all else fails, everyone who contributed to this discussion thus far
        could certainly find employment as writers!  How lucid can you
        get?  I’m impressed.

      • #3137516

        Happy Halloween: I hope you have a job.

        by mark miller ·

        In reply to Happy Halloween: I hope you have a job.

        You may be right about open source offering entry-level opportunities where other such opportunities have been cut off. Conceptually, I don’t entirely agree with your analysis. It’s been said many times that companies get open source because it’s “free beer”. They like it because it costs almost nothing to get it legally. Taken to its “logical” conclusion, that could encourage offshore outsourcing: since the company got the OSS for free or at low cost, why not continue the idea by getting low-cost labor to work on it? I put “logical” in quotes because my gut estimation is that the kind of analysis companies do before they decide to offshore is pretty shallow in most cases. They offshore/outsource the wrong stuff for the wrong reasons. Typically, outsourcing is best when you’re looking to increase quality of service, or just offload something that’s not your core competency (not central to your business). I talked with the guy I’m working with now, who has a degree in economics, about this subject a year ago. He said that outsourcing just to cut costs is the last thing a company really should do, because in the long run, outsourcing will actually end up costing more than doing the same thing in-house. The question is, is it worth it? In the cases where outsourcing is used properly, the increased cost is worth it, because quality of service goes up as well.

         

      • #3137198

        Happy Halloween: I hope you have a job.

        by lcave ·

        In reply to Happy Halloween: I hope you have a job.

        I just don’t agree.  While it’s true that entry-level jobs are being outsourced, have you not heard what’s going on in the support industry?  The offshore support is so poor, and there is so much repetitive training involved, not to mention supervision, that even short-term savings are not being realized by companies.

        I predict a huge return of entry-level jobs to first-world countries.  We have always had some sort of open-source software!  This is not new.  My advice:  Don’t put all your eggs in one basket.

      • #3137189

        Happy Halloween: I hope you have a job.

        by wayne m. ·

        In reply to Happy Halloween: I hope you have a job.

        Not Just IT

        I agree with the comments on outsourcing, but I also want to point out that it is a much wider ranging problem than IT and IT was not the first industry affected.

        Look at the PC in front of you.  How much of it is US manufactured?  The monitor, the hard drives, ICs, and motherboards are all manufactured offshore.  The US no longer manufactures TVs.  The automobile industry is in decline.  Steel is imported.  According to a recent article in the Washington Post (11/6/05), companies are starting to export health care.  In some border locations, companies are requiring US employees to go over the border to Mexican doctors’ offices.  That some of the employees actually prefer Mexican doctors to US assembly-line medicine is another, related issue.

        Meanwhile, we are told that this is good for us.  It is a good thing to send these jobs overseas.  We each should retrain for an ever-shrinking set of well paying jobs.  You don’t have bread, why not eat cake?  I am proud to say, though, that our Texas based politicians do draw a line.  We can offshore everything except oil!  Here is where we take a stand!

        There has been an ongoing dehumanization of business in America.  Loyalty, trust, and reliability have been replaced with clever manipulation on spreadsheets and accounting ledgers.  This is the underlying problem, and though I have respect for open source software, it is not going to resolve the problem.  Businesses need to rediscover service to society and abandon PR spin and accounting tricks.

      • #3117150

        Happy Halloween: I hope you have a job.

        by sysengineer ·

        In reply to Happy Halloween: I hope you have a job.

        The social climbers and gold diggers need to get the hell out of the IT industry so that the way can be paved for people with a genuine interest in technology – people like me, who, after working with computers all day, go home and read the trade rags, study for the next certification and do something every day to better myself and my knowledge of the IT industry. Take the social climbers, for instance: good at reading office politics, staying out of the way and kissing ass on their way up the corporate ladder. Go get your MBA, take a job at Starbucks and leave the IT work to the real professionals, the ones who have made a career or would like to make a career out of working with and building technology. Shall I say more…

    • #3118954

      Web Scripting: Another Look

      by apotheon ·

      In reply to bITs and blogs

      I remember the naive excitement of all things Web and Dynamic, back in the early days of the dot com boom. I remember that Internet Explorer was the new undisputed champion that rose from the ashes of a brief, vicious, and very one-sided browser war. I remember DHTML, which some people were calling HTML 5.0. In those days, DHTML meant HTML and Javascript on Internet Explorer, since nobody who used DHTML could get everything to work on other browsers or, for that matter, much cared if it worked on other browsers. Oh, how times have changed.

      A number of problems began to develop. People became increasingly disenchanted with the way Internet Explorer’s handling of HTML 4.0 and Javascript changed from one minor IE revision to the next. People started to notice that Javascript was behind all those horrible, annoying advertising popups that were creating unremovable porn-site ad windows covering up everything else they were trying to do. People started to get hit with arbitrary remote code execution exploits made possible by unscrupulous use of client-side scripting with Javascript and ActiveX. People started noticing that, by coding only to the standards of a fully activated, unsecured, plugin-heavy, latest version Internet Explorer, they were losing increasing amounts of business. Something had to be done.

      Ever-larger segments of the web development community were rethinking their approaches to putting together a website. Server-side scripting was becoming the thing of the day. A lot of people were going back to the temporarily neglected Perl/CGI, PHP started getting linked in people’s minds with MySQL in a way that made them almost indivisible, and ASP became the Microsoft loyalist’s web development rallying cry. Soon, other players started entering the scene: we had Python, JSP, and even a smattering of Scheme and Ruby server-side scripting.

      Meanwhile, web browsers were changing, both in configuration and, eventually, in basic design. Opera gained a small, but extremely loyal (some might even say “rabid”) following. People started simply turning their browsers’ Javascript interpreters off. A lot of people simply didn’t bother to install plugins like JVM and Flash. Konqueror and Mozilla started nickel-and-diming their way into a tiny little slice of browser market share — particularly Mozilla, which was based on Netscape code as well as being cross-platform compatible. The much ignored (by IE coders) World Wide Web Consortium’s web standards started to become relevant, particularly to web surfers with disabilities and the developers who wanted their business. Netscape was revitalized, just a little bit, by the influence of Mozilla’s rapid improvement on its original codebase. The Phoenix, then Firebird, then Firefox browser project spun off Mozilla and became the rousing mindshare success the Mozilla suite never was. Now, some browsers even allow you to turn specific parts of Javascript functionality on and off, and ActiveX is widely recognized as the unnecessary security risk it is.

      People design pages using XHTML now (like HTML, but built from XML, and making more sense than HTML ever did) with CSS for client-side dynamic behavior. It became very unpopular and frowned-upon to use Javascript unless you were in the business of taking advantage of the credulous public to earn your ill-gotten gains as a spammer. LAMP — Linux, Apache, MySQL, and PHP (or Perl or Python) — has become a force to be reckoned with. Secure web coding that doesn’t invade privacy and doesn’t use plugins or Turing-complete client-side scripting languages has become the order of the day. XML, other than XHTML, is in a state of flux somewhere between the Next Big Thing and Unnecessary Fluff in the minds of most web developers, though for some reason it’s seeing impressive growth rates as the format language of choice for a whole bunch of things for which it should never be used (like network protocols).

      It’s not over yet, though. Some things are happening. Agile web development with Ruby on Rails is becoming very popular in certain circles, and seems to be the best server-side web development framework since the original webserver, at least according to the hype, and I’m inclined to agree. Google has returned the oomph to Javascript by proving that, even with all but the most basic functionality of Javascript turned off in your browser, you can design wonderfully dynamic browser-based web applications that run quickly and smoothly, all without stepping on anyone’s toes. Suddenly, Rails and AJAX are the tools of the day. This is what everyone wants to learn and use: either AJAX or Ruby on Rails. At least, everyone with an eye toward the cutting edge of web development wants to use one or the other.

      Do we really need this stuff?

      Well, we do need a client-side scripting language, and Javascript is what we’ve got. Asynchronous data transport opens the activity of web design to whole new levels of dynamic eye candy functionality. AJAX is Asynchronous Javascript And XML, and even XML is sort of a necessity at least insofar as it plays a part in XHTML. I really don’t think we need all of XML in there, though. This is an infusion of the “new toy revivified”, where XML is having trouble making inroads because all it seems to do is duplicate the functionality of HTML and some CSS, with a bit of extensibility. It tends to make markup hard to read, and webpages hard to write. It has all the problems of a dynamic syntax with few, if any, of the benefits of dynamic execution. Maybe this isn’t the way to go (aside from XHTML, of course — thank whatever you hold to be sacred for markup that makes sense).

      I’ve been a pretty dedicated opponent of Javascript in web design for a while, but I’m rethinking that. My reasoning was that we didn’t need a fully Turing-complete language interpreter in our browsers that had hooks into all kinds of crazy local system capabilities, the way Javascript did. With Firefox (for instance), though, we now have an approximation of what I said we actually did need: by making sure Javascript is turned on, but everything (or nearly everything) in the Advanced configuration for Javascript is turned off, you basically get the functionality of the language I felt we needed, with the limitations I felt it should have. It’s not perfect, but nothing ever is. Combine neutered Javascript with CSS, and you’ve got a heck of a lot of functionality with very little risk at all. There’s hope for Javascript yet.

      I’m still not keen on full XML functionality. I think we’d probably be better off able to deactivate, or not even install, full XML support in our browsers, and only support XHTML. Maybe I’ll rethink that some day, but for now that seems to be the case. We might at least think about designing without any XML aside from XHTML until such time as people start figuring out that unfettered XML use shouldn’t be applied, willy-nilly, to absolutely everything under the sun, but considering the way people still haven’t learned that Java isn’t the all-singing all-dancing panacea Sun claims it is, I don’t hold out high hopes that’ll happen. Still, having the ability to turn off non-XHTML support for XML in a browser might have the same positive effect on the XML-heads that turning off Javascript support did on the guys in the late ’90s who tried to use Javascript for absolutely everything, just so they could create ever-uglier webpages that took exponentially increasing times to load, introduced geometrically increasing avenues for security exploits, and made websites ever-more difficult to navigate. In the meantime, maybe we should focus on the dynamic functionality of our client-side design, and stop screwing around so much with a markup language with dynamic syntax when all we’re doing with it really is producing unreasonable facsimiles of functionality other tools provide with far greater efficiency, elegance, and ease.

      Couple this with server-side back end scripting that dramatically reduces the code you have to write without gutting its flexibility and functionality, or slowing it down to the speed of a Visual Basic crawl, like you can get with Rails. Maybe call our front-end, client-side tool AJACSS: Asynchronous Javascript And CSS. XHTML is assumed these days, anyway, and really you could as easily make do with old-school HTML 4.0 if you wanted to. The point of markup, after all, is just to point at remote files. CSS provides (almost) all the formatting you need, and Javascript makes it dynamic.

      Yeah, let’s see if we can do a lot with very little. Minimal cruft for maximum functionality. Speed and elegance over buzzwords and byzantine code.

      For the love of all that’s holy, don’t use something just because it’s on the cover of a magazine. That’s how we ended up with server-side Java and arbitrary remote code execution with Javascript and ActiveX in the first place.

      We’ll call this Perrin’s First Law of Web Development: Don’t use tools you don’t need.

      It’ll only end in tears, otherwise. Well, either tears or ugly, slow, spaghetti-code websites that are a significant security threat.

      • #3118923

        Web Scripting: Another Look

        by illilli ·

        In reply to Web Scripting: Another Look

        My programming group probably consists of dinosaurs, but we’ve been programming with Client-Side Javascript and CSS for many years.  Due to our customers’ environment, we use ASP for the server-side programming linking to SQL Server.  I’ve found that we can do just about anything we need to do with this configuration.  Granted, we aren’t running any huge volume websites, but there are plenty of web-based tools that people need that we are able to provide.  I find that simple works best.

        Anyway, thanks for the blog, it was interesting reading. 

      • #3117261

        Web Scripting: Another Look

        by jaqui ·

        In reply to Web Scripting: Another Look

        um, xml is not restrictive, it’s a few restricted tags and a layout requirement for well-formedness.
        highly extendible.
        it is designed to be used as the foundation of a dtd to suit any documenting needs for any industry.
        using it as a transfer protocol isn’t a designed purpose.

        personally, javascripted, flash driven, vbscripted or activex required
        websites just cost the company any chance of doing business with me.
        your site, your server can run it.
        no clientside scripting allowed.
        ( unless the company that owns the site are willing to pay to use my
        cpu to run their web app as required by distributed computing business
        rules )

        section 243 of the Criminal Code of Canada says that without a signed
        hard copy contract, clientside scripting is “unauthorised access”, so
        the javascript, flash, vbscript, and activex control driven sites are
        illegal.

      • #3123607

        Web Scripting: Another Look

        by apotheon ·

        In reply to Web Scripting: Another Look

        um, xml is not restrictive

        Who said it was? I’m not really sure what you’re trying to say, or
        what point you’re trying to make. Are you arguing against some
        perceived statement that XML is restrictive (which I don’t think anyone
        said), or comparing it to something else that you see as more
        restrictive and thus bad, or something else entirely?

    • #3127547

      a day in the life of the Internet: snort

      by apotheon ·

      In reply to bITs and blogs

      I was checking my laptop’s snort logs this morning, as I typically
      do, and decided that one of the tables of events would be enlightening
      for some of you out there who have incorrect assumptions about security
      and technical worth of various technologies. Keep in mind, as you read
      this, that the more poorly patched implementations of something there
      are out there on the Internet, the more attack variations there are
      going to be — and the more vulnerable the technology is,
      fundamentally, the more attack types per attack method variation.

      Keep in mind, also, that this is just the stuff that showed up from
      within the two different networks to which I connected this laptop
      yesterday: one at home and one at work, both protected by firewalls.
      Imagine the hilarity that would ensue in my snort logs if this laptop
      were connected directly to an ISP without any intermediary firewall.

      %      # of  method
      =====================================================
      97.93 42783  MISC UPnP malformed advertisement
       1.50   655  POP3 TOP overflow attempt
       0.25   108  SCAN UPnP service discover attempt
       0.09    40  WEB-IIS %2E-asp access
       0.03    12  WEB-PHP viewtopic.php access
       0.03    11  MS-SQL Worm propagation attempt
       0.03    11  MS-SQL Worm propagation attempt OUTBOUND
       0.03    11  MS-SQL version overflow attempt
       0.02     8  WEB-PHP viewtopic.php access
       0.02     7  ICMP PING CyberKit 2.2 Windows
       0.01     6  WEB-PHP viewtopic.php access
       0.01     5  WEB-PHP viewtopic.php access
       0.01     5  WEB-PHP viewtopic.php access
       0.01     5  WEB-PHP viewtopic.php access
       0.00     2  WEB-IIS %2E-asp access
       0.00     2  WEB-PHP viewtopic.php access
       0.00     2  WEB-PHP viewtopic.php access
       0.00     2  WEB-PHP PHPBB viewforum.php access
       0.00     2  (http_inspect) OVERSIZE CHUNK ENCODING

      Here’s what some of the more difficult-to-parse stuff on that list means:

      • MISC UPnP malformed advertisement — This is an attack on a
        Microsoft Windows Universal Plug and Play buffer overflow
        vulnerability. This type of attack is attempted by way of the Windows
        XP Internet Connection Sharing client. When you set up WinXP to share
        its Internet connection with other computers on the network, you are
        prompted to install this client on other Windows machines (Windows 98,
        98SE, and ME) or to use the already-present client on WinXP machines
        that will be sharing the connection. Thus, machines with the WinXP ICS
        client installed and UPnP running are vulnerable to this buffer
        overflow attack. There is a patch for this, but as far as I’m aware it
        doesn’t usually get installed, since with non-XP systems patching is
        sort of a lost art.
      • POP3 TOP overflow attempt (http://www.snort.org/pub-bin/sigs.cgi) —
        This is a generic overflow attack aimed at older, vulnerable
        implementations of POP3 mail servers. If you have a reasonably new
        version of a reasonably well maintained mail server, this should not
        affect you. Of course, a lot of computers (regardless of OS) have old,
        clunky Mail Transfer Agents running a POP3 service on them. It’s worth
        checking what you’re running.
      • SCAN UPnP service discover attempt — This isn’t so much an attack
        as a probe to determine vulnerabilities. There’s a myriad of UPnP
        vulnerabilities out there, and if any are found by this scan, you’ll
        likely get hit by an actual attack shortly following this event. UPnP,
        by the way, is a Windows-only service. Just in case you weren’t clear
        on that.
      • WEB-IIS %2E-asp access — Microsoft introduced a new vulnerability
        to IIS with a hotfix meant to plug up another security hole some time
        ago. That new vulnerability allows remote access to Common Gateway
        Interface scripts running on the IIS webserver, such as ASP,
        ColdFusion, and Perl scripts. This is commonly done to gain access to
        usernames, passwords, and data source names, so that your webserver can
        be rooted.
      • ICMP PING CyberKit 2.2 Windows — This is basically a “check to see
        if there’s a target there” probe. An interesting point here is that it
        doesn’t actually target Windows at all: rather, it is run from a
        Windows machine. If you’re running a fairly well-featured firewall,
        such as iptables or pf, you can easily configure your system to simply
        drop all incoming ICMP requests (see the sketch just after this list).
        If you’re trying to make do with something like Windows Firewall or
        ZoneAlarm, on the other hand, you’re pretty much screwed. The world
        knows you’re there.
      • (http_inspect) OVERSIZE CHUNK ENCODING — This is indicative of an
        attempt at a denial of service attack on older webservers. It affects
        both older versions of IIS and older versions of Windows-based Apache.
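
      For reference, here is a minimal sketch of what that “just drop incoming
      pings” idea looks like in iptables terms (pf syntax differs). With a
      default DROP policy on the INPUT chain you get the same effect
      implicitly, but the explicit rule works on more permissive setups too:

      # drop unsolicited ICMP echo requests so the machine doesn't answer pings
      iptables -A INPUT -p icmp --icmp-type echo-request -j DROP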

      I figure the PHP-based stuff is all pretty clear. These are
      PHP-based forum script vulnerabilities, mostly related to phpBB and
      mostly a result of some piss-poor language design and rather unskilled
      coders using the language (PHP is so accessible as a language that any
      knucklehead can write code in it).

      With the exception of the MS-SQL and UPnP attacks and scans, and the
      ICMP scan, this is all webserver-directed stuff. I don’t have a
      webserver running on this system, and on systems where I am running a
      webserver I don’t have any of these vulnerabilities. Many kitchen sink
      type Linux installs, and basically all Windows installs, are however
      vulnerable to some of these. I’m also not running a vulnerable MTA
      (updated Postfix on this machine), and because my MTA is for handling
      local mail only, it wouldn’t be accessible to the POP3 attack in any
      case. The PHP vulnerabilities in that list all require a webserver
      running PHP scripts. The MS-SQL worm and overflow attacks all require a
      Windows system running the Microsoft SQL DBMS. Since I’m running a
      “lean and mean” Linux install with exactly what I want installed, and
      nothing more, I lack these vulnerabilities on this system, and having
      an iptables configuration that by default has DROP rules for
      unsolicited incoming and forwarding connection attempts ensures that
      this stuff would all just be water off a duck’s back anyway.
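
      If you want to run the same kind of tally against your own machine, a
      rough approximation of the table above can be pulled out of snort’s
      logs with standard shell tools. This is only a sketch: it assumes
      snort’s “full” alert format and the common /var/log/snort/alert
      location, and it produces raw per-signature counts rather than
      percentages. Adjust both for your own setup.

      # count alerts per signature name, most frequent first
      grep '\[\*\*\]' /var/log/snort/alert |
        sed 's/.*\] \(.*\) \[\*\*\]/\1/' |
        sort | uniq -c | sort -rn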

      How well protected are you?

    • #3129293

      internet vs. Internet

      by apotheon ·

      In reply to bITs and blogs

      I ran across an article and discussion from (much) earlier this year that I hadn’t noticed when it was new. In particular, I refer to the suggestion that, because of spam issues, SMTP needs to be replaced. I have one word in response to this: balderdash.

      Perhaps I should elaborate, though.

      SMTP is a communication protocol, nothing more, nothing less. The recommendation of that article, and of many of the discussion posts that follow it, is that it needs to be replaced with a mail protocol that incorporates authentication and/or registration so that email security can be better assured. The article refers to layers of “band aids” being slapped onto SMTP in the form of spam and virus protection software. Some of the discussion posts refer to open relays and protocol field requirements that are too permissive.

      A few points need to be made in response to these complaints, I think.

      First, there’s a distinct difference between a protocol and an implementation. A protocol defines what data must be exchanged for (hopefully quick and efficient) identification of the communicating systems, and the format of the data that is exchanged, so that the receiver knows what to expect. Whether or not that communication is accepted by the receiving system is up to the receiving system’s implementation and its configuration. In other words, issues with authentication and security, generally speaking, are the fault of a poorly locked down implementation on the receiving end. The data format exists in the form of the SMTP protocol for getting information to the receiving system: whether or not the data is accepted and passed on to users in the form of email is up to that system’s setup. If you have a problem with security, perhaps you should examine your mail server software. The protocol doesn’t solve the problem, just as the English language cannot be modified to solve the problem that people lie.
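
      To make that distinction concrete, here is roughly what an SMTP exchange looks like on the wire, typed by hand with netcat (or telnet). The hostnames and addresses below are made up, and the exact response strings vary from one mail server to the next; the shape of the conversation is the protocol, while what the server chooses to answer (250 Ok, 554 Relay access denied, and so on) is the implementation:

      $ nc mail.example.org 25
      220 mail.example.org ESMTP
      HELO client.example.net
      250 mail.example.org
      MAIL FROM:<someone@example.net>
      250 Ok
      RCPT TO:<user@example.org>
      250 Ok
      DATA
      354 End data with <CR><LF>.<CR><LF>
      Subject: protocol versus implementation

      The protocol just carries the message. The server decides what to accept.
      .
      250 Ok: queued
      QUIT
      221 Bye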

      Second, once you start including implementation definitions in something, it’s no longer a protocol: it is now software configuration. It’s best to keep your software architecture definition and your protocol definition separate. There’s a widely known truism amongst software developers that modularity is good, all else being equal. It allows for greater flexibility, greater maintainability, and so on. If you want to see email fragmented into a slew of incompatible forms that ensure people often cannot communicate with each other, go ahead and mix implementation with protocol; otherwise, leave well enough alone. SpamAssassin is data analysis, not a protocol “band aid”.

      Third, open relays make the Internet go ’round — not directly, but because they are the necessary symptom of allowing flexibility in a “free” Internet. The only way to prevent open relays is to actually firewall the entire Internet by routing all communications through a single, central point of failure. If you think email suffers security problems now, just try dealing with the security issues of painting a big red X on a single target that manages all email on the Internet. It’ll be down more often than it’s up, I guarantee it.

      Fourth, permissive protocol fields are, again, an implementation matter — not a protocol matter. If you want to be able to ignore all domain names that don’t fit the Internet standards, implement a mailserver that does so. No matter how you define the protocol, it must still be implemented. Even if your protocol definition clearly states that the last period-delimited segment of an FQDN suffix of the “From” field must be no more than three characters in length, that won’t actually change anything directly; when implementing a mailserver package, someone will figure the protocol definition covers the problem and won’t hard-code the limit into the mailserver’s packet acceptance definitions.

      Fifth, even if you can magically (and I do mean “magically”, since that’s the only way it would happen) force all implementations to exactly match the standard — even Microsoft’s — you’d then have the problem that you’ve just eliminated the flexibility needed for many small networks to be able to use TCP/IP in closed network environments in manners unforeseen by the common end-user who can’t see past his own nose and his direct connection to his ISP. Some of us run networks (private internets, with a small “i”) that only touch the Internet (big “I”) by way of a router/firewall, possibly with network address translation. Do you really want to have such a rigid definition that you can’t have privately defined, privately implemented email networks that don’t touch the Internet directly, but do have to deal with clients that touch the Internet? If you limit all email to officially designated top level domains, that means that suddenly you have conflicts between email from the Internet and email from a subnet on your local network, since both might have to be called blah@blah.com. As things currently stand, at least you can receive email locally from the Internet in the form foo@bar.com, your network (or internetwork) in the form foo@bar.commie, and your local computer in the form foo@localhost.

      That’s not to say that SMTP couldn’t use an overhaul. It just doesn’t need an overhaul for the reasons cited, and can’t be overhauled in the manner suggested. What it needs is some adjustment to make it match up more accurately with the first letter of the acronym, which stands for “Simple”.

      In fact, judging by that article and its responses, it looks to me like some of these shortsighted, authoritarian-minded individuals want to replace SMTP with ICMTP: Intentionally Complex Mail Transfer Protocol. No thanks.

      If you really think any of that would stop spam, anyway, you’re kidding yourself. I guess I shouldn’t be surprised by such misconceptions, though: the “there oughta be a law” crowd has always been more populous than those of us who have a clue why there isn’t such a law.

      • #3126909

        internet vs. Internet

        by jmgarvin ·

        In reply to internet vs. Internet

        I agree with most of your points however SMTP can be “fixed:”

        1) Since SMTP is plain text, we should start by “fixing” that.
        Email messages should be encrypted out of the gate.  IPv6 should
        help in many respects here, but no more plain-text messaging over
        unsecured lines

        2) SMTP is too easy to insert into.  It is FAR too easy to insert
        stuff into SMTP and NOBODY cares…I don’t know how to fix this other
        than to not use SMTP….I’m sure there is a solution, but I think it
        would be easier to create a new, more robust protocol BASED off SMTP

        3) Open relays are needed, but come on…I can set up a mail server
        anywhere, spoof a bunch of information, and I have just created a spam
        generator.  We need a way to authoritatively deal with mail servers
        (something like DNS)

        Other than that I agree.  ICMTP is awful.  I thought it was
        just because it was young, but it seems to be getting worse…They are
        trying to create an everything protocol, and that would be far worse
        than what we have now.

      • #3128105

        internet vs. Internet

        by apotheon ·

        In reply to internet vs. Internet

        I’ll offer some clarifications of points.

        1. You can’t specify encryption in a general protocol definition because encryption methods change. Encryption is an implementation detail. MTA software might be developed that allows for encrypted packets, or you might just hack a way to configure what you’ve got to use an SSH tunnel when and where it’s appropriate (see the sketch after this list), but you can’t really specify it in the protocol because you’re just imposing overhead that will end up obsolete as soon as someone can crack that encryption algorithm in a reasonable time period.
        2. You might have a point about SMTP being “too easy to insert into”. I’m not exactly the world’s foremost expert in SMTP, and am not familiar with this problem.
        3. We do need a way to deal with mail that can handle stuff like spam generators. I agree. Invent something for us. Heh.
        4. “Other than that I agree. ICMTP is awful. I thought it was just because it was young, but it seems to be getting worse…They are trying to create an everything protocol, and that would be far worse than what we have now.” Yeah, no kidding. People, it seems, still have not learned the important lessons of modularity.
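
        Regarding the SSH tunnel hack mentioned in point 1, here is a minimal sketch of the idea. The hostnames and the local port are placeholders, and it only secures the hop between two machines you control:

        # forward local port 2525 to port 25 on the mail host, inside SSH
        ssh -f -N -L 2525:localhost:25 user@mail.example.org
        # then point the local mail client or MTA at localhost:2525; that
        # leg of the delivery now travels encrypted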

      • #3126386

        internet vs. Internet

        by master3bs ·

        In reply to internet vs. Internet

        I try to avoid commenting on blogs when I have little more to say than simply, “I agree.”

        Having said that; I agree.

        The difference between protocol and implementation is particularly well
        taken.  Combining the two can make things worse; and not doing it
        will only give an SMTP alternative with the same problems.

        What is really needed is better implementation, better spam
        prevention.  I’m not opposed to the idea of a new email protocol
        on the face of it, but the problems you pointed out with
        standardization would have to be addressed out of the gate.

    • #3128044

      simple iptables management

      by apotheon ·

      In reply to bITs and blogs

      In a previous entry to ITLOG, I mentioned that my secure iptables
      configuration on this laptop provided me with protection against
      attacks of the sorts indicated by my snort output. I’m following that
      up now with some information on how that setup was achieved, what it
      contains, and how you can manage such a setup without too much
      difficulty in a networked environment.

      In his own TR blog, jmgarvin provided some good tutorial information
      on iptables setup and management from the command line, using Webmin,
      and using Bastille. The beginning of it all can be found way back in
      May of 2005 at this link target.
      I’m not trying to do anything nearly that ambitious with this post, but
      I am going to show how I manage my iptables setup for laptops at my
      place of business. This is a quick and simple way to handle things,
      generally speaking, particularly when you have a bunch of computers to
      work with that all need to have the same, or similar, iptables setups.

      The first thing I do, of course, is flush my iptables setup with iptables -F -t filter, iptables -F -t nat, and iptables -F -t mangle.
      As jmgarvin pointed out, specifying “-t filter” isn’t technically
      necessary, since the filter table is the default for iptables commands.
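
      As a block, that flush step looks like this (iptables -X, which deletes
      any user-defined chains in the default table, is an optional extra):

      iptables -F -t filter    # the default table, so -t filter is optional
      iptables -F -t nat
      iptables -F -t mangle
      iptables -X              # optional: remove user-defined chains too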

      Next, for the work systems, I created a reasonably secure iptables
      configuration using all the various iptables commands at my disposal.
      This is an operation you probably won’t have to go through once you
      finish reading this, because I’ll provide a simple template file you
      can use to set up your own iptables configuration.

      After testing that configuration out for a little while, I output it
      to a file using iptables management commands. As jmgarvin pointed out, iptables -L will give you information on the current state of your configuration, but I find that iptables-save
      produces more useful output for many operations. This prints all the
      iptables rules you’ve entered to STDOUT (which, typically, means your
      monitor’s screen). You can use a redirect to save that data to a file,
      which you should save in an easy to remember location. On some setups,
      the /root directory (root user’s home directory) might suffice. At work, I tend to use /var/lib/iptables, though I’m considering changing that to /var/local or /var/local/iptables. Another option, of course, is to save it in /etc
      somewhere, since it is arguably system configuration data. Save it
      under a filename that makes sense for you and for where you’re saving
      it, so that you won’t forget what it is based on the filename. If you
      have it in a location that includes the term “iptables” in the path, a
      filename like “saved” or “save.cfg” might suffice. Assuming you’re
      using “save.cfg”, your final path to the file might look like this: /var/lib/iptables/save.cfg.
      That’s not what I call it, but my choice is really fairly irrelevant.
      To save the configuration data by way of iptables-save, navigate to the
      directory where you want to store it, then enter a command like this: iptables-save > save.cfg. In case you’re curious, you can use that “>” symbol (called a “redirect”) to save the output of any command to a file.

      Once you have that file saved, you can edit it to contain the
      iptables rules you desire at your leisure, and use the iptables-restore
      command to load them. By default, iptables-restore will flush your
      iptables before rebuilding them, so you don’t have to worry about
      flushing the tables before using that command to ensure that everything
      works the way you want it to. To configure iptables using
      iptables-restore, simply use a redirect (in the other direction this
      time) to point the contents of your previously saved file into the
      iptables-restore command, like so: iptables-restore < save.cfg.
      Because save.cfg is not a command or an executable script, but is
      instead merely a file filled with data that can be used by a command,
      it cannot be listed first at the shell when using iptables-restore. As
      such, you must use the leftward-facing redirect to point it “backward”
      at the command you are using. It might seem a little unintuitive at
      first, but after using redirects a little bit you’ll start to get the
      hang of it.

      Now, if that save.cfg file of yours is something you want to reuse
      for a lot of other computers, you should save a copy of it somewhere
      convenient. Then, to apply it to a new computer, all you have to do is
      copy it to the new system and, on that system, run iptables-restore < save.cfg
      just as you did on the first system. This completely replaces the
      iptables configuration in use with the new one. One caveat: the loaded
      rules live in the kernel, so whether they survive a reboot depends on
      your distribution having an init script or network “pre-up” hook that
      runs iptables-restore against your saved file at boot time. If yours
      doesn’t, add one; otherwise, the rules stay put until you change them
      again.
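
      Putting the whole cycle together, it looks something like this (the
      file location matches the example path above, the remote hostname is
      just a placeholder, and all of these need to run as root):

      iptables-save > /var/lib/iptables/save.cfg       # snapshot the working rules
      scp /var/lib/iptables/save.cfg otherbox:/var/lib/iptables/
      ssh otherbox 'iptables-restore < /var/lib/iptables/save.cfg'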

      Here’s an example template of a save.cfg file for you, with some explanation:

      *mangle
      :PREROUTING ACCEPT [48436:11233990]
      :INPUT ACCEPT [48436:11233990]
      :FORWARD ACCEPT [0:0]
      :OUTPUT ACCEPT [29730:6162034]
      :POSTROUTING ACCEPT [29730:6162034]
      COMMIT
      
      *nat
      :PREROUTING ACCEPT [391:49336]
      :POSTROUTING ACCEPT [1793:110951]
      :OUTPUT ACCEPT [1793:110951]
      COMMIT
      
      *filter
      :INPUT DROP [0:0]
      :FORWARD DROP [0:0]
      :OUTPUT ACCEPT [1418:147349]
      -A INPUT -i lo -j ACCEPT
      -A INPUT -m state --state ESTABLISHED -j ACCEPT
      -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
      -A INPUT -p tcp -m tcp --dport 631 -j ACCEPT
      -A INPUT -p all -s 127.0.0.1 -d 127.0.0.1 -j ACCEPT
      -A INPUT -j DROP
      -A OUTPUT -o lo -j ACCEPT
      -A OUTPUT -p tcp -m tcp --sport 22 -j ACCEPT
      -A OUTPUT -p tcp -m tcp --sport 631 -j ACCEPT
      COMMIT

      This iptables configuration essentially causes your computer to
      pretend, first off, that any incoming connections that have not been
      solicited by you just don’t exist. It also does the same for forwarding
      packets. This is a generally good policy, using the :INPUT DROP and
      :FORWARD DROP defaults. Exceptions can be created later for specific
      ports, addresses, and so on. Meanwhile, so that you don’t accidentally
      block something your users need to do, you should probably use an
      :OUTPUT ACCEPT default. A more secure way to configure
      it is to use :OUTPUT DROP with exceptions defined for behaviors you
      want to allow, but that can get prohibitively difficult with end-user
      client systems that must perform a wide variety of networking functions.

      -A INPUT -i lo -j ACCEPT allows your system to accept all
      incoming requests that arrive on the loopback interface (that is,
      traffic the machine sends to itself). This is
      useful for things like testing your network configuration by pinging
      yourself, getting local system mail delivered (like when your computer
      wants to tell you something broke), and so on.

      -A INPUT -m state --state ESTABLISHED -j ACCEPT takes
      advantage of the stateful packet filtering capabilities of iptables to
      allow you to function quite flexibly with default DROP policies for
      incoming packets. This line basically states that any connection that
      is initiated by you will be allowed to continue, circumventing
      the DROP policies for incoming packets related to already established
      connections. Otherwise, you might be able to send data to another
      server, but never know whether it got there because when the server
      tried to reply your firewall would just drop the packets without
      comment.

      -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT allows incoming
      SSH connections. You might want to change that 22 to another,
      nonstandard port number, and ensure that anyone that is supposed to
      have remote SSH access to the computer in question knows to use a
      different port when making such connections. For a system to which
      nobody should ever have remote access, you should remove this line from
      this file before using it. When you’re more well versed in iptables in
      the future, you might consider replacing this line with several lines
      that define what sources are valid for attempts to connect to the
      computer remotely, so that even on a nonstandard port no system cracker
      is going to be able to use SSH to get in from the wrong source (such as
      an incorrect IP address, et cetera), but that’s well beyond the scope
      of this article.
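
      Still, for the curious, the “several lines” idea might look something
      like this in the saved-rules format shown above, with the addresses
      standing in for whatever networks or hosts you actually trust:

      -A INPUT -p tcp -m tcp -s 192.168.1.0/24 --dport 22 -j ACCEPT
      -A INPUT -p tcp -m tcp -s 10.0.0.5/32 --dport 22 -j ACCEPT

      With the default DROP policy in place, SSH attempts from any other
      source simply never get an answer.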

      -A INPUT -p tcp -m tcp --dport 631 -j ACCEPT allows for
      connectivity with network printers using CUPS. If that’s not something
      you have to worry about, delete this line. More complex, more secure
      iptables rules than this can be used to do the same thing, but this is
      a decent starting point.

      -A INPUT -p all -s 127.0.0.1 -d 127.0.0.1 -j ACCEPT is another “allow me to talk to myself” line.

      Based on the INPUT explanations I’ve just given, the OUTPUT lines
      should be fairly self-evident. A lot more can be done with iptables to
      ensure system security, but I’m only aiming to provide a starting point
      for managing iptables configuration. This very short, simple
      configuration is superior, security-wise, to every single
      distro-default iptables configuration I’ve ever laid eyes on, despite
      the fact they’re typically a hundred lines long, give or take — except
      when they’re empty or simply nonexistent.

      Have at it, and good luck.

    • #3130143

      What the heck are “iptables”?

      by apotheon ·

      In reply to bITs and blogs

      Every Linux user eventually runs across the concept of iptables. Early on, though, (s)he/it might not know what “iptables” means, or even have heard the term. I just wrote something quick about iptables management in this blog without ever explaining what it is. Bad me. I’ll rectify that situation now.

      I’m going to assume you’ve heard of a “firewall” and have some vague notion of what that means in relation to computers and networking. As a quick summation, though, a firewall basically provides and enforces rules for allowing or denying network access on specific ports, from or to specific networked computers, and so on. Most Windows users, when they think of a “firewall”, think of Windows Firewall, ZoneAlarm, or what they tend to refer to as a “hardware firewall”, such as the many rinky-dink router appliances that can be purchased from Circuit City, Best Buy, and so on.

      Windows Firewall and ZoneAlarm (along with a slew of others), sometimes called “software firewalls”, are in fact little different in concept from the so-called “hardware firewalls”. In point of fact, the biggest difference between the concepts is the security that each provides. Because the “software firewall” is on the local system, it provides a reduced security potential: by the time unauthorized traffic touches the “software firewall”, it has already touched the system you’re trying to protect. That doesn’t mean you shouldn’t use such a thing, however. It’s an extra layer of security, and used properly it can enhance the overall security of your network. You should never, ever consider it a substitute for a separate (“hardware”) firewall, however.

      Applications such as the Windows Firewall and ZoneAlarm are really pretty crappy, as firewalls go. Even ZoneAlarm Pro (much better than the free version, and staggeringly, incredibly, frighteningly better than the deeply flawed Windows Firewall on Windows XP systems) is not great as a firewall. Norton Firewall is better in some ways than the above, in that it is capable of providing better security, and worse in others, in that it is difficult to configure, hides even more of what it’s doing than ZoneAlarm (and about as much as Windows Firewall), and in general has the potential to seriously screw things up. Ultimately, the major problem with all of these popular “software firewalls” on Windows systems is that they do not operate at a low enough level to provide really significant security. There are a couple of firewall applications for Windows that do provide a more fundamental firewalling capability, making use of Windows kernel socket APIs, but the Windows OS design and driver APIs provide for potential “leakage”, so even these Windows socket-layer firewalls (such as the iSafer Winsock Firewall: http://winsockfirewall.sourceforge.net) can be worked around by a clever security cracker, depending on the sort of hardware you’re using for network connectivity, what drivers you’re using, and so on.

      Ultimately, the problem with these Windows-based firewalls is that they’re software that sits on top of the OS trying to get the OS to relinquish control of network packet control earlier than it really “wants” to so that the traffic can be filtered effectively.

      Free unices tend to have a much better packet filtering model. Linux, for instance, has the netfilter project, which works on kernel-integrated network traffic filtering. The management system for that, which handles filtering rules for netfilter to apply and enforce, is called “iptables”. The OpenBSD analog to iptables is called “pf”. It happens that iptables (as well as pf) works extremely well as a firewalling system. While I haven’t done an exhaustive survey, I’d say that probably at least half of the little “hardware firewalls” you run across in retail electronics outlets are in fact running a stripped-down embedded Linux kernel with netfilter, some running iptables and some running some wacky hybrid thing that replaces iptables just to make everything work “differently” somehow — probably to frustrate the efforts of people who would like to have more hands-on control of how the router/firewall appliance is working behind the scenes. In any case, if you’ve used a store-bought router/firewall appliance, there’s a reasonable chance you’ve used something running iptables for firewalling, even if you’ve never installed Linux on anything.

      Because of the open, modular design of Linux (and other free unices, for that matter), kernel-integrated network packet filtering has become very easily implemented and improved over the years. This allows for a very close marriage of the firewalling capability of such OSes with the network interface itself, providing a basically impenetrable security model, in theory. The security you can get from this depends on your ability to effectively define firewall rules and the flexibility and functionality of the filtering rules management system (in this case, iptables), however.

      There was a predecessor to iptables called ipchains. From what I’ve seen thus far, it looks like ipchains differed from iptables mostly in that it was a little more of a pain in the butt to work with, and in that it was stateless. What iptables being “stateful” means is that it can apply firewall rules based on the current state of network connections: a rule can depend on whether a packet belongs to a connection this machine initiated, for instance, rather than simply blocking or opening a port outright. This makes iptables much, much more useful for ensuring system security than ipchains.
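
      To make “stateful” concrete, the single rule below (the same one used
      in my earlier post on simple iptables management) tells netfilter’s
      connection tracking to accept whatever comes back on connections this
      machine initiated, while a default DROP policy keeps discarding
      everything unsolicited:

      # accept packets that belong to connections this machine initiated;
      # everything unsolicited still falls through to the DROP policy
      iptables -A INPUT -m state --state ESTABLISHED -j ACCEPT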

      It’s always a good idea to have a decent iptables configuration in place on a given machine, regardless of outside firewalls you may have. Each individual machine, depending on its uses, might have different packet filtering needs. As a result of this, the external firewall device should have a configuration that permits everything the most permissive of your local systems is going to need to do: each individual machine, then, can deny as much of that as it can get away with.

      When you first set up a Linux machine, it should have an iptables configuration of some sort already in place. That might consist, in some distributions of Linux, of a pages-long set of complex rules designed to allow you to make use of hundreds of applications and services that you probably won’t ever touch: this is the approach that “kitchen sink” distributions like Mandrake have taken over the years. Very minimal systems have a tendency to just set everything to “allow”, giving an extremely simple, but essentially pointless, configuration with the assumption that the user will do something to change that.

      There are user interface tools that can be used to manage your system’s security profile in a higher-level, more abstracted, more “user friendly” manner than hacking iptables configuration yourself. These include both CLI-capable tools like Bastille, and GUI tools with pretty colors and clicky buttons like KDE’s Guarddog. There are even Linux distributions whose whole purpose is to provide a GUI front-end to iptables with a fair amount of configurability, reasonably sane default configurations, and integration of the configuration interface with that for routing services and other functionality commonly implemented on a network firewall box. I find, though, that working with iptables directly tends to provide a greater understanding of network and system security, and thus a better ability to manage both. Thus, my devotion of an ITLOG post to the subject of simple iptables management.

      If you’ve wondered what iptables is, and what you should do with it, I hope this has provided your answers.

      • #3125301

        What the heck are

        by nre ·

        In reply to What the heck are “iptables”?

        Very interesting article.  I did learn more about iptables than I knew before.  In the strictly Windows environment of our university, with a user community that is not computer literate, what software firewall would you recommend?  We would need one that does a good job and is easy to configure.  We struggle with educating our users on their importance, and many will not use them because they are too much of a bother or too hard.

        Thank you for your suggestions,

        Nancy

      • #3135524

        What the heck are

        by apotheon ·

        In reply to What the heck are “iptables”?

        Give Winsock Firewall a try. I posted a URL for it in the text of the article itself. If that doesn’t work out for you, take a whack at ZoneAlarm. It’s not perfect, but it’ll do, generally. If you set it up correctly, you can “train” it to allow traffic only for the applications you want to have Internet access. That’s not the most secure way to configure a stateful firewall, but it’s what ZoneAlarm allows, and it’s one of the things that makes ZoneAlarm better than stuff like the Windows Firewall.

        There may be others in the TR community who know more about what Windows-based software firewalls are available than I do. If you can find someone who has more technical information to offer on the subject than me, by all means listen to what they have to say on the subject.

      • #3083449

        What the heck are

        by f.groen ·

        In reply to What the heck are “iptables”?

        Okay, if one is running a Unix-based OS it’s clear that iptables can and has to be considered. But what you are actually stating is that everyone should discontinue the use of any Windows-based OS and migrate to Unix/Linux in order to have a secured system. However, no matter what distribution you take, for the average home user it’s still a pain to get things working as his/her former Windows OS did (even with all the limitations they encountered in working with Windows). And it’s typically the user at home who is constantly in the line of hacker fire.

        Greetings,

        Frank

         

    • #3081131

      Elegance

      by apotheon ·

      In reply to bITs and blogs


      elegant (adj.): characterized by a lack of the gratuitous

      There are two ways of constructing a software design. One way is to make it so simple that there are obviously no deficiencies. And the other way is to make it so complicated that there are no obvious deficiencies.

      – C.A.R. Hoare

      There’s a long tradition of referring to the elegance of a system. In the IT industry, this tends most commonly to be applied to source code, and it is generally accepted that the more elegant it is, the better. Elegance is differentiated from other superficially good things in a number of ways, including the common assumption that elegance goes deeper, while these other “good” things are only good within certain constraints.

      For instance, “clever” source code is good for its cleverness, but can be bad for maintainability — mostly because clever code is often difficult to understand. Cleverness also falls short because of a simple principle first articulated in an email signature of Brian Kernighan’s: “Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.”

      Another example is object oriented programming. Probably ninety-some percent of the competent programmers out there are thoroughly sold on the concept that OOP is the holy grail of programming techniques, and any further advances in programming techniques are just fine-tuning OOP techniques. I think this common perception is an outgrowth of twenty years of corporate influence on the evolution of programming, where large numbers of mediocre programmers end up handling the same codebase over the course of its lifespan. Two orthogonal systems of minimizing the damage a mediocre programmer can do to a project have been introduced to programming practice with a great deal of success: version control and object oriented programming.

      Object oriented programming isn’t the holy grail, though. It doesn’t in any way aid with the creation of truly excellent code. It simply aids in the avoidance of truly atrocious code, and even then only in an aggregate view of a complex project. When you start drilling down to the individual bits and pieces of a complex software project that has passed through the hands of a great many mediocre programmers, you’ll start seeing atrocious bits of code that limp along just well enough to keep working, as long as they’re strictly encapsulated and separated from the rest of the codebase (except for its API, of course). Encapsulation and modularity are good things in general, but they aren’t immutable axioms of goodness.

      One of the trade-offs with object oriented programming is that it encourages repetitive action and tedious effort in writing code. Have a look at some “enterprise” class Java source code some time and start paying attention to how much of it is actual program logic, as contrasted with how much of it is scaffolding imposed on the source by Java’s object-orientedness. In fact, if you really want to understand what’s going on with object oriented programming and other superficially “good” things in programming, I recommend you start comparing how easily one can produce short, elegant code in various languages, and pay attention to why one language produces a shorter, more elegant solution than another. I think you’ll find some surprising facts come to light.

      Of course, it’s true that brevity is not strictly synonymous with elegance. In fact, Perl golf — the practice of passing code around between programmers to see how short a given algorithm can be made — is a thoroughly gratuitous sport, concerned little, if at all, with elegance. In pursuing elegance, it is more important to be concise than brief, though. In a general sense, however, brevity of code does account for a decent quick and dirty measure of the potential elegance that can be eked out of a programming language, with length measured in number of distinct elements rather than the number of bytes of code: don’t confuse the number of keystrokes in a variable assignment with the syntactic elements required to accomplish a variable assignment. Armed with that definition of the term “shorter”, you should be able to make some meaningful comparisons of the elegance possible when working with various programming languages.

      In particular, you might notice that without using any object oriented techniques, Common Lisp and Perl produce much shorter examples of certain algorithms than Java and C++. Even if you cut out all the object oriented scaffolding in the Java and C++ examples, you still typically end up with a lot more code, as measured in discrete syntactic elements. Things like lexical variables and anonymous blocks (or, roughly equivalently, lambdas) tend to make for much simpler, more elegant solutions than imposing rigorous OOP structure. In fact, the more you examine the matter and make such comparisons, the more I suspect you’ll come to realize that OOP itself has nothing to do with producing elegance, and everything to do with limiting opportunity for mediocre programmers to produce cruft and introduce bugs.

      Elegance is about the gratuitous — or, rather, avoiding the gratuitous. It’s true that sometimes people disagree about which of two or more things is the “most elegant”, but this arises from underlying assumptions rather than any true subjectivity of the principle. Each of us has a set of operating assumptions, some greater (meaning: bloated and cumbersome) than others. Where something conforms to one’s expectations and assumptions, it is not seen as lacking elegance in that respect. Someone who does not share those assumptions might see the same thing as atrociously inelegant, while his own, different assumptions lead him to overlook quirks in another example that the first person would find inelegant.

      Specifically, someone with assumptions derived from long indoctrination by the OOP crowd might overlook all the scaffolding imposed by a language like Java for using OOP techniques, and see something that takes up 50 lines of program logic and 150 lines of OOP scaffolding as elegant. Meanwhile, a long-time Perl hacker might take one look at that and see it as the inelegant monstrosity it is. This Perl hacker, on the other hand, might write 30 lines of procedural code to perform the same task, and the Java programmer might look at it and wonder why it isn’t more modular, simplifying the program logic itself and making the whole thing more scalable for future code maintenance, thus rightly seeing the inelegance of the procedural hack the Perl programmer threw together.

      This doesn’t make elegance subjective: it only makes our individual perspectives on it subjective. If we can discard the assumptions of both the Java developer and the Perl hacker, and recognize the underlying principles of source code design that contributed elegance to each solution, we could probably turn the same set of solutions into something much, much simpler and more elegant, in terms of its program logic and cruft-weight. Unfortunately, languages like Java are not really suited to that sort of optimization for elegance: you really need a language more dynamic than that, such as Perl, Python, Ruby, or basically any Lisp. The more a language lets you define the language you’re using on the fly, the more likely it is to allow an excellent programmer to produce elegance, which should really be the end goal of writing code, generally speaking: elegant solutions.

      All really useful principles of programming, or systems design in general, seem to be practical, case-specific extrapolations from my fundamental definition of elegance. In short, they all seem to boil down to this one instruction: If it’s gratuitous, find a way to get rid of it. For example, consider the Pragmatic Programmers’ DRY principle — Don’t Repeat Yourself. In short, it is better to avoid repetitions of data and program logic in your code. Any time you find yourself having to repeat or rephrase something in your code, reinject data into your data model from wherever you have it stored, and so on, you’re screwing up. Ask yourself whether DRY is really useful by reducing repetition in and of itself, or by reducing gratuitous repetition. After all, recursion and looping behavior might also fall within the definition of “repeat yourself”, but I don’t think anyone (sane) would ever recommend eliminating all loops and recursion from programs. Sometimes, you just need your program to perform a given set of instructions on a long list of slightly different items. Often, loops make source code more elegant.

      This ties in very nicely with the more general, more philosophical concept of aesthetics, and that provides some understanding of why it is possible to look at source code and, without yet consciously knowing what’s wrong with it, have an immediate intuitive reaction to its inelegance. That’s not to say that something can’t be aesthetically pleasing without being perfect in its elegance, of course. Instead, the ability to recognize some characteristics of elegance is what leads to an aesthetically pleasing perception of the subject.

      Elegance is not about aesthetics. Rather, aesthetics is about elegance. Ostentation lacks aesthetic appeal, and is inelegant, because it’s gratuitous. Simplicity is often not elegant either: if something is too simple, it is nonfunctional, and fails to achieve its aim. What makes something beautiful is not strictly simplicity, symmetry, complexity, or any other such characteristic. Instead, what makes something beautiful is that its characteristics are all appropriate to its purpose. Complexity can be exceedingly beautiful, as long as it’s not gratuitous complexity, which is just chaos and confusion. Likewise, simplicity can be exceedingly beautiful, but if you make something gratuitously simple, you get dullness rather than beauty. Gratuitous simplicity is merely boring.

      When you’re writing source code, make it elegant. When you’ve written something, go back and look it over, and for each and every thing you’ve done you should take a moment to question whether it’s really necessary, or even functionally desirable, to have it in there. You probably won’t get it perfect, but you can at least make it awfully pretty, which is a good thing as long as you do so by addressing elegance rather than trying to disguise the inelegance of your code by conforming to formatting conventions without rethinking your program logic at all. Refactoring, in the end, is really just about looking with fresh eyes for any opportunities to introduce elegance by removing the gratuitous.
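
      As a tiny Ruby example of the sort of thing that turns up when you look over your own code this way (the method and the order object are hypothetical):

      # Before: gratuitous branching. The if/else merely restates the result
      # of the comparison.
      def discount?(order)
        if order.total > 100
          return true
        else
          return false
        end
      end

      # After: the condition already is the answer.
      def discount?(order)
        order.total > 100
      end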

      If you’re unlucky, you may discover that making your code significantly more elegant might require rewriting it in a different language.

      The following definitions are from Princeton WordNet:

      elegant (adj.): of seemingly effortless beauty in form or proportion

      gratuitous (adj.): unnecessary and unwarranted

      • #3094429

        Elegance

        by sterling “chip” camden ·

        In reply to Elegance

        Very good.  It seems to me that the holy grail of OOP should be the elegant simplification of complex systems by creating abstractions that mirror our thinking and speaking about the components of the problem at hand.  Because everything can be both generalized and dissected to no end, it’s important to create those abstractions at the right levels.  I agree with you that OOP languages require a lot of extra verbiage just to describe those abstractions.  Your discussion could be used as a sending-off point for an exploration of how languages might be structured so that they can describe complex systems with multiple layers of abstraction in more concise and elegant terms.

      • #3094403

        Elegance

        by apotheon ·

        In reply to Elegance

        Indeed. OOP could be bent to the task of aiding elegance, to a certain degree, and there are a couple of languages out there that do a halfway decent job of that: Smalltalk, Ruby, and Objective-C come to mind. Under the pervasive influence of the corporate software development paradigm of the last two decades, however, using encapsulation and modularity to limit the power of the individual programmer on software projects has emerged as an apparently more important criterion for language choice and programming methodology.

        A lot of that is going out the door, with the advent of agile programming techniques, the growing popularity of more dynamic languages such as Ruby, and the growing popularity of new programming methodology classics like The Pragmatic Programmer. Emphasis on small, fast, well-tuned programmer teams with some clue on what elegant code looks like is growing. I don’t foresee this sounding a death knell for Java and .NET by any means, but it will likely cut into their potential market share if this trend continues.

        For that, I’d be grateful.

      • #3096206

        Elegance

        by whollyfool ·

        In reply to Elegance

        A very well-written and thought-provoking article.  I will send this to my friends.  Thanks for writing it.

        Wendy

      • #3096102

        Elegance

        by wayne m. ·

        In reply to Elegance

        “I don’t think anyone (sane) would ever recommend eliminating all loops and recursion from programs.”

        As an aside from a possibly insane OO (not necessarily OOP) proponent, I wish all loops and recursion were eliminated from programs, in the sense that structured programming eliminated GO TOs from programs.

        Applying an operation to all members should be a basic capability of a set or collection class.  There is no reason that the implementation, which may very well use a loop or recursion, should be exposed to the programmer, or that the programmer should have to explicitly request a repeated operation and its starting and ending points.  As a programmer, I should be able to request that an operation be performed upon a set or collection and not be concerned with how each member is operated upon.

        The for loop and its relatives should be eliminated as programming primitives and be replaced with operations that are inherently applied across a collection of items.  This capability is feasible with current compiler technologies and could be added to many popular languages, eliminating one more source of potential errors from code.
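
        In a language that already leans this way (Ruby, used here purely as an illustration), the difference looks something like this sketch:

        prices = [5, 12, 7, 30]

        # Iteration managed by hand: the programmer owns the loop machinery.
        total = 0
        i = 0
        while i < prices.length
          total += prices[i]
          i += 1
        end

        # Operations applied across the collection: say what should happen to
        # every member and leave the traversal to the implementation.
        total   = prices.inject(0) { |sum, p| sum + p }
        doubled = prices.map    { |p| p * 2 }
        cheap   = prices.select { |p| p < 10 }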

         

      • #3095498

        Elegance

        by kovachevg ·

        In reply to Elegance

        Excellent post. It brings to light the fact that a skilled programmer chooses the language based on the program he has to write. For example, if it is a Unix utility, use C or shell scripts; if it’s a small web-based system, then use PHP or some other suitable scripting language like Perl. Java is suitable for enterprise systems because one needs to keep things modular, flexible, and scalable. These characteristics typically warrant the complexity Java provides and are in no way gratuitous.

        Sometimes it pays to step back and learn a new technology or language – like Snow Ball, which replaces 700 lines of C code with just 30 lines. Talk about elegance. The author is absolutely right about the innovative ways we need to look for when writing software, but we also have to consider what Einstein said about problem solving: “You cannot solve a problem in the context it has occurred.” And so we need to turn to fundamentally new ways of writing software. There is an interesting example in one technology I recently had to use on a project – Hibernate. Hibernate is an automatic object-to-relational mapping technology. It “writes” SQL on the fly based on XML definitions that describe the relationships between objects. So in this context, the complexity is still there but we no longer see it. We only deal with objects, not with relations, tables, and queries. In my humble opinion, that’s elegance.

      • #3095496

        Elegance

        by kovachevg ·

        In reply to Elegance

        Excellent post. It brings to light the fact that a skilled programmer chooses the language based on the program he has to write. For example, if it is a Unix utility, use C or shell scripts; if it’s a small web-based system, then use PHP or some other suitable scripting language like Perl. Java is suitable for enterprise systems because one needs to keep things modular, flexible, and scalable. These characteristics typically warrant the complexity Java provides and are in no way gratuitous.

        Sometimes it pays to step back and learn a new technology or language, for example Snow Ball. I bet not many of you have heard about it. I wouldn’t know about its existence either if it weren’t for my ex-boss, who programmed on IBM mainframes at Brown. He told me how 30 lines in Snow Ball replaced 700 lines in C – definitely elegant, don’t you think?

         

      • #3095477

        Elegance

        by hippikon ·

        In reply to Elegance

        This is rank hypocrisy!! The author has shown an excessive fondness for the phrase “gratuitous” and yet most of this article is (yeah, you guessed it) gratuitous. For example, the long monologue on elegance being subjective

        Quote – This doesn’t make elegance subjective etc etc etc……..”

        can be replaced with – Elegance means different things to different people…This by itself negates the rest of the article. If I think Java is great, does that make me cruel corporation/mediocre programmer/what? And there is SO much pseudo-intellectualism here that it’s irritating and patronizing and BOOORING!!!!!!!!!!

        All right, I’m not going to deny that the author makes many good points. But he also fails to convince on many others, showing partisanship and a lack of an open mind.

        If the point is objectivity, let’s be completely objective here… A “Java BAAD” is simply not an argument. Java is obviously useful in at least SOME programmatic situations, come on!!!!!!!!!! Nothing is all bad except Sauron…

        ALL those people around the world using it cannot be fools or corporations or worshippers of mediocrity!!!!!!….  I am left confused about whether this is mere corporate propaganda or some kind of lament by a programmer with a chip on his shoulder about Java…..

      • #3095346

        Elegance

        by rjacobsen ·

        In reply to Elegance

        A lot of thought has gone into this posting; it is successful in that it makes us look at the value of OOP and other programming approaches.

        Although I agree that the OOP approach can keep some programmers out of trouble, I recall one project at my company that was being managed and coded by a guy who thought the “Visitor” design pattern was the be-all and end-all of OOP – the entire code base was based on this pattern; it was more ponderous than Scrooge’s chain, was never fully installed successfully and died a horrible, slow, agonizing death.

        It doesn’t matter which languages or paradigms we use to build software; the pursuit of “elegance” over “creating solid, maintainable code” can really throw a wrench into the gears of a project where implementation was fairly straightforward and where success was quite attainable.

        -RJ

         

      • #3095283

        Elegance

        by grantwparks ·

        In reply to Elegance

        Object-oriented thinking is a mechanism by which one’s program can achieve higher cohesion/lower coupling.  As I always say, there’s been nothing new written about programming since Yourdon and De Marco.  Most every concept touted as new was in their book.  My colleagues and I were doing OO programming in assembler and C on MVS mainframes before the term was in use.  We didn’t have inheritance, although by keeping pointers to other objects inside of our objects, we could still encapsulate the methods for both.  And inheritance seems to be one of the stickier issues in OO programming and can often make things worse.  My feeling is that object-oriented programming is generally great – yet I pretty much loathe formal OO languages.  Much of the code I write during development is going to be thrown away – it is experimental, as I see how the whole system begins to gel.  When I decide I want to do something simple like sorting an array so that I can easily examine it for dups, Java (or C# or whatever) makes me change things so that the array implements Comparable, etc., etc., just to do this simple thing, even though I know I don’t need the array sorted in the final product.  It kills me.  AND – this is crucial – I don’t usually know, as I’m writing and refactoring, where certain functionality belongs, and I move functions around, which is a much bigger headache in formal OO languages.  The formal class definitions in those languages usually guide mediocre programmers into mapping out all their objects first in a design – which is what all the OO tutorials guide you to do.  Works OK in the fictitious world, but not in the real world.  And then new classes have to be continually defined because the original design isn’t adequate.  I’ve worked in at least a dozen languages – serious work – and you can do OO programming in any language that allows the passing of pointers (specifically function pointers).  Lately I’ve been doing a lot of Firefox/Mozilla programming and it has really moved me into OO Javascript programming.  And I love it.  I can jam some quick stuff in to see how it works and then still use more formal classes once I see where functionality belongs.

        And to the point – just because a person uses Java, or C# doesn’t make them an OO programmer.  A .Net devotee I knew (who’d been using it since it early beta) touts himself as an advanced OO programmer/analyst, but when I asked him the best way to implement an XML driven tree control in C#, whether I should derive my control from .Net or create my own with a reference to the .Net one, he glazed over and said “I don’t know, I’ve never done anything like that”.  C’mon, leveraging what’s already been done is oo thinking.  If all you do is use the class libraries provided, you’re not doing oo.

        Long story short – disciplined programmers don’t need OO to do great work.  The undisciplined ones should be let go to work in code factories.  (DISCLAIMER – I am not saying that all programmers who use and love OO languages are mediocre or undisciplined.)

      • #3095281

        Elegance

        by slackola ·

        In reply to Elegance

        I also feel that the author does not give Java a chance in this article.  Because Java is OOP, some development shops can go ridiculously overboard using crazy frameworks that create lots of gratuitous scaffolding.  However, for most applications, a little bit of black-box abstraction can go a long way towards making the code easier to create and to debug.  Usually, multiple levels of inheritance are unnecessary and confusing, and overloading is typically only useful for utility functions.  But if you use common sense to separate parts of your app into packages and classes that make sense and act like black boxes, then your app becomes significantly more manageable and easier to debug.

        Furthermore, enhancements are easier to add to a Java app that has logically been divided into classes and packages.  You can simply change the internal workings of a class for the new behavior; if it affects the public API for the class, the compiler will show you where all the other changes need to be made, because Java is strongly typed.  Any language that is OO and strongly typed has that ability to be manageable, easily debugged, and easily enhanced.

        All the article’s talk about encapsulation helping to protect the code from mediocre programmers is true, but encapsulation also helps protect the code from great programmers who are not immune from making the occasional blunder!

      • #3094805

        Elegance

        by apotheon ·

        In reply to Elegance

        hippikon:

        For example, the long monologue on elegance being subjective

        Quote – This doesn’t make elegance subjective etc etc etc……..”

        can be replaced with – Elegance means different things to different people.

        Obviously, you didn’t get the point of that paragraph. Perhaps you should read it a few more times and let it sink in. In fact, “elegance means different things to different people” provides in some ways an impression that is directly opposite of what I was saying. That is not the point of the paragraph: it is merely a caveat.

        If I think Java is great, does that make me cruel corporation/mediocre programmer/what?
        Why would you think that? You’re getting cause and effect backwards. I never made that statement in any way, nor did I even imply it particularly. It’s true that there are more mediocre Java programmers than great Java programmers, but that’s more because Java attracts and supports mediocre programmers — not because only mediocre programmers can use it. Besides, there are more mediocre programmers than excellent programmers everywhere, no matter what the language is. I think you’re too quick to take insult. Maybe you should have a look at cuppa joe: Java In Context to get a clearer view of what I think of Java, and stop jumping to (inaccurate) conclusions.

        Nothing is all bad except Sauron.
        This just makes me want to create a programming language called Sauron.

        Enough of you. I’m going to comment on something else now.

        rjacobsen:

        the pursuit of ‘elegance’ over ‘creating solid, maintainable code’
        The guy obsessed with one design pattern in your little anecdote may have been pursuing elegance, but it looks like he was doing so by running in completely the wrong direction. Pursuit doesn’t necessarily imply catching what you pursue, and it sounds to me like the major problem was that he wasn’t achieving elegance at all. Elegant code is solid, maintainable code, among other things.

      • #3094767

        Elegance

        by geoff.arnold ·

        In reply to Elegance

        “As an aside from a possibly insane OO (not necessarily OOP) proponent, I wish all loops and recursion were eliminated from programs, in the sense that structured programming eliminated GO TOs from programs.”

        So much for most numerical algorithms, not to mention searches. Not all iteration is collection- or set-based, grasshopper.

      • #3094758

        Elegance

        by geoff.arnold ·

        In reply to Elegance

        Quoth Wayne M.: “As an aside from a possibly insane OO (not necessarily OOP) proponent, I wish all loops and recursion were eliminated from programs, in the sense that structured programming eliminated GO TOs from programs.” Oh really? So much for most numerical algorithms, not to mention searches. Not all iteration is collection- or set-based, grasshopper.

      • #3095900

        Elegance

        by godaves ·

        In reply to Elegance

         

      • #3095896

        Elegance

        by sjtech ·

        In reply to Elegance

        Unfortunately, the author has no real clue on this subject but does like to rant. To relate elegance simply to gratuitous or non-gratuitous code is absurd. To assume OOP is inelegant simply because it requires “scaffolding” clearly shows a lack of understanding of true Software Engineering. OOP, if done in its true sense, is by its very nature elegant. I write a great deal of code in PERL and JAVA. It is much easier to write inelegant code in PERL because of its nature, while JAVA lends itself to more elegant code. But I have certainly written elegant code in both, and the reverse as well.

        Also, I could even say C has too much scaffolding if I compare it to Assembler. I have written highly optimized routines in Assembler for performance reasons. Any other language would be inelegant no matter how well it was written, simply because of the overhead.

        So, in the end, the language does not matter.

        Elegance in software relates not only to the look of the code, but to the functionality it creates as well as to the original design of the system. I could easily write elegant-looking code that does nothing, or elegant functions that could not be maintained, or elegant-looking code that runs elegantly written functions but where the original design makes no sense. It is all these components which make or break elegance.

        Further, in very complex systems, maintainability is a very large requirement. As such, non-gratuitous code may in fact be a very big help in this regard. This does not make the system inelegant; on the contrary, it makes it more elegant, especially to the maintainer.

      • #3095889

        Elegance

        by hippikon ·

        In reply to Elegance

        Hey, I’m only making one point here….ELEGANCE REALLY IS SUBJECTIVE, shocking as the author may find it!!!!

        And I swear I can’t read the corresponding paragraph again….Sorry.

        One more thing – Brevity is the soul of wit…i.e. No brevity, no wit :D.

        The author is a master obfuscator, but I’m still not able to understand the motivation…Anyway it’s not mine to judge…I’m sure you have good reason and circumstances for your article…No hard feelings from my side :D!

        At least we got to see a good debate going.

         

      • #3095818

        Elegance

        by apotheon ·

        In reply to Elegance

        SJennett:

        The author of what? I would tend to guess that you would mean the original Soapbox article, Elegance, except that the points against which you’re arguing weren’t actually made for the most part. For one thing, you seem to be taking an entirely too narrow view of where the term “gratuitous” can be applied. For another, you simply seem to be conjuring opinions out of thin air and attributing them to me. Finally, of course, you’re simply wrong about some things — such as your statement to the effect that OOP code is inherently elegant. If you really believe that’s the case, you’ve either never seen a lot of the spaghetti OOP I’ve seen (which would probably mean you’re doing all your programming in an ivory tower somewhere, removed from the real world of programming) or never actually figured out how to write and/or recognize good code.

        As for it being easier to write inelegant code in Perl than in Java: yes, that’s true. It’s also true that it’s easier to write elegant code in Perl than in Java. It’s a trade-off. In exchange for the freedom to achieve greater elegance, you get the danger of sawing off your own foot. That’s one of the reasons Perl has been referred to as the Swiss Army Chainsaw.

        Finally, scaffolding is essentially cruft. Any time you have to repeat complex effort tediously, you’re generating cruft. I don’t yet know of a way to make object oriented code without at least a little touch of scaffolding, such as definition statements for classes and object instantiation, without making a language so unwieldy in implementation that it’s essentially not worth using. That doesn’t mean that you should have to write twenty lines of scaffolding for every thirty of program logic, however, which is just beyond the pale. If you can honestly look at nearly half your source code being filled with scaffolding and call it “elegant”, you need to recheck your dictionary.
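
        For a sense of scale, here is roughly the minimum scaffolding object orientation seems to demand, sketched in Ruby (the class and names are invented for illustration): a class definition, a constructor, and an instantiation, with everything else being program logic.

        class Greeter
          def initialize(name)
            @name = name
          end

          def greet
            "Hello, #{@name}!"
          end
        end

        puts Greeter.new("World").greet   # => Hello, World!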

      • #3095763

        Elegance

        by rjacobsen ·

        In reply to Elegance

        Right you are! This is the point I was (unsuccessfully) trying to make…. Pursuit of elegance is not always successful, particularly by those who do not understand it and/or do not understand how to use OOP principles.

        I should have said: “The misguided pursuit of elegance can derail a project that could be easily implemented and done so with a great potential for success.”

        Thanks for pointing out my misstep in communication.

         

      • #3095712

        Elegance

        by cletuspaul ·

        In reply to Elegance

        Much of the inelegant code that we see cluttering up software projects is caused by use of inappropriate tools or methodologies. Too often I have seen programmers forcing OOP on every project even when it is clearly not required, this results in much of the “scaffolding.”

        That complexity is sometimes a result of a misguided fixation on Java or C# , where everything must be a class,  even when an old fashioned “data and procedures” paradigm would be more suitable.  We should be clamouring for tools that give one a choice between “data / procedure” and OOP, thus enabling the programmer to use the most appropriate methodology for the various parts of the project.

        An aside: just ask any programmer to write code in C, Java, and C# to swap the values of two variables.  While this might appear insignificant, the differences in the coding highlight the case for choosing one’s tools very carefully.
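
        As one illustration of how much the choice of tool changes the phrasing, here is a Ruby sketch contrasting the temporary-variable shuffle that C, Java, and C# require with parallel assignment:

        a, b = 1, 2

        # The shape C, Java, or C# force on the swap: a temporary slot.
        tmp = a
        a   = b
        b   = tmp

        # Parallel assignment, where the language supports it.
        a, b = b, a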

      • #3078984

        Elegance

        by tony hopkinson ·

        In reply to Elegance

        Well, I’m a big fan of OO in business IT. Some of the implementations I don’t care for. Never seen one that couldn’t be improved, either. To my mind, in coding for corporate IT the goal is not elegance. Elegance, however you care to define it, is a luxury. The goal is maintainability. To be maintainable, particularly by a group of developers with an often wide range of skills, code has to be modular and readable; anything after that is gravy. Along with the obvious target of meeting the requirement with an acceptable level of performance, you have to be able to give that code to someone else to change it. If you can’t, you wasted your employer’s money and should be treated accordingly. OO, however, is not a silver bullet; it’s not always the best solution even in high-level business IT. OO will never be the optimal solution in the final deliverable, as it does have a considerable overhead. In most situations that overhead will be an acceptable cost weighed against its other potential benefits.

        As a footnote, the three most badly designed applications I’ve had the misfortune to work on were all OO ones, so as developers, don’t rely on that spangly new star on your wand; learn how to wave it properly.

      • #3079710

        Elegance

        by cletuspaul ·

        In reply to Elegance

        This is not meant to be another “I Love XYZ best”  song, but since we are talking about the need for scalable, maintainable, and elegant code, perhaps I should mention the “D Programming Language”

        It is in 0.82 beta at the moment, and can be found at:
        http://www.digitalmars.com/d

        I append this overview found at the site:

        —– text begins —

        What is D?

        D is a general purpose systems and applications programming language. It is a higher level language than C++, but retains the ability to write high performance code and interface directly with the operating system API’s and with hardware. D is well suited to writing medium to large scale million line programs with teams of developers. D is easy to learn, provides many capabilities to aid the programmer, and is well suited to aggressive compiler optimization technology.

        D is not a scripting language, nor an interpreted language. It doesn’t come with a VM, a religion, or an overriding philosophy. It’s a practical language for practical programmers who need to get the job done quickly, reliably, and leave behind maintainable, easy to understand code.

        D is the culmination of decades of experience implementing compilers for many diverse languages, and attempting to construct large projects using those languages. D draws inspiration from those other languages (most especially C++) and tempers it with experience and real world practicality.

        Major goals of D

        • Reduce software development costs by at least 10% by adding in proven productivity enhancing features and by adjusting language features so that common, time-consuming bugs are eliminated from the start.
        • Make it easier to write code that is portable from compiler to compiler, machine to machine, and operating system to operating system.
        • Support multi-paradigm programming, i.e. at a minimum support imperative, structured, object oriented, and generic programming paradigms.
        • Have a short learning curve for programmers comfortable with programming in C or C++.
        • Provide low level bare metal access as required.
        • Make D substantially easier to implement a compiler for than C++.
        • Be compatible with the local C application binary interface.
        • Have a context-free grammar.
        • Easily support writing internationalized applications.
        • Incorporate Contract Programming and unit testing methodology.
        • Be able to build lightweight, standalone programs.
        • Reduce the costs of creating documentation.

        Who is D for?

        • Programmers who routinely use lint or similar code analysis tools to eliminate bugs before the code is even compiled.
        • People who compile with maximum warning levels turned on and who instruct the compiler to treat warnings as errors.
        • Programming managers who are forced to rely on programming style guidelines to avoid common C bugs.
        • Those who decide the promise of C++ object oriented programming is not fulfilled due to the complexity of it.
        • Programmers who enjoy the expressive power of C++ but are frustrated by the need to expend much effort explicitly managing memory and finding pointer bugs.
        • Projects that need built-in testing and verification.
        • Teams who write apps with a million lines of code in it.
        • Programmers who think the language should provide enough features to obviate the continual necessity to manipulate pointers directly.
        • Numerical programmers. D has many features to directly support features needed by numerics programmers, like direct support for the complex data type and defined behavior for NaN’s and infinities. (These are added in the new C99 standard, but not in C++.)
        • D’s lexical analyzer and parser are totally independent of each other and of the semantic analyzer. This means it is easy to write simple tools to manipulate D source perfectly without having to build a full compiler. It also means that source code can be transmitted in tokenized form for specialized applications.

        — end text —

      • #3252151

        Elegance

        by sterling “chip” camden ·

        In reply to Elegance

        Can hippikon please avoid the gratuitous exclamation points?

      • #3252135

        Elegance

        by sterling “chip” camden ·

        In reply to Elegance

        Trackback: More thoughts on your post here: http://www.chipstips.com/microblog/index.php/post/61/

    • #3081004

      The Sony Fiasco That Wouldn’t Die

      by apotheon ·

      In reply to bITs and blogs

      Remember the Sony DRM fiasco? Of course you do. It’s still going on.

      Well, here’s more to make you giggle. By the way, don’t buy the new Coldplay album, entitled X & Y.

      If you’re still buying CDs from Sony and friends, you’re not paying attention. Where do they find these morons to run their companies, anyway?

      • #3095237

        The Sony Fiasco That Wouldn’t Die

        by jmgarvin ·

        In reply to The Sony Fiasco That Wouldn’t Die

        Sony will never learn.  It seems that we are all pirates and should be treated as such…You pirate!

    • #3094582

      What’s important in a computer?

      by apotheon ·

      In reply to bITs and blogs

      What are the important things about your computer setup? I’m fishing for answers. Some are obvious, of course, such as security, stability, and performance. What are the important features for you about the software you use? What’s important about vendor choice? What sort of applications do you tend to need? What sort of support and documentation do you want for your software? What’s important to you in a community of users? What are the things you wish were different about the computer configurations you’re currently using?

      In short: What’s already done right, and what needs to be done better?

      If at all possible, try to break it down into specifics. Just saying “Computers are too hard to work with!” doesn’t tell me anything, but something like “It should be easier to change my disk partitioning!” would be useful.

      • #3094538

        What’s important in a computer?

        by sterling “chip” camden ·

        In reply to What’s important in a computer?

        I would like to see virtualization becoming more a natural part of a server O/S.  Specifically, I’d like memory allocation to be dynamic between virtual systems, which of course means that the hosted O/S also has to be able to manage a variable amount of memory.  Since my goal in the next several years is to convert to living and working part time in maybe three or four places around the world, I would like to have this on a notebook that has enough horsepower to run two virtual servers and at least two virtual workstations simultaneously.  Right now, that means about 4GB of RAM, at least two hot processors, and at least 200GB of disk space — not too many notebooks out there now that sport that, plus I would need room for growth.  So, more horsepower to the notebooks, more integrated virtualization. Oh, and ubiquitous wireless high-speed secure Internet access world-wide.  TIA Santa.

      • #3096150

        What’s important in a computer?

        by lastchip ·

        In reply to What’s important in a computer?

        I would like an operating system that does not decide what I may, or may not do with it.

        I’m not saying I would wish to break the law, but if, for example, I want to copy a DVD, then I should be able to do so, knowing that the consequences of my actions may lead to trouble further down the line.

        A computer is a machine that I should be in command of at all times. It should not have any ability to prevent me doing exactly as I wish. If I want to drive a car at 120mph, providing it’s capable, then I can do it, but that may lead me into problems with law enforcement. The point I’m trying to make is, the car doesn’t stop me from doing as I wish. I just have to accept there may be repercussions because of my behaviour.

        That said, in order to use this super-computer we need well-written documentation: not written by technicians or programmers, but by specialist writers who can describe to a beginner how to use a program effectively. To use a car analogy again, some years ago Ford discovered that many of their owner/drivers could not understand their car’s handbook. Why? Because it was written by engineers, rather than by consumer writing specialists who could translate technical information into an easy-to-understand form. The result was (and still is) astonishing, in as much as you would have to be pretty thick not to understand the current generation of car handbooks.

        For example, I’ve been trying to get Samba to work in a Windows network. The Linux (Debian) machine can see the Windows files, but whenever I try to see the Linux machine from Windows, it asks for a password. Research indicates that this is due to a non-existent password (or file), and the extensive available Samba documentation says: add a password! Yes, but how? This is the sort of issue that needs to be addressed, and never will be while technicians are writing documentation, because they assume the reader has a certain level of knowledge. But if you’re learning something from scratch, your knowledge base is zero. This seems to be a concept that cannot be understood by a great many people.

      • #3096036

        What’s important in a computer?

        by apotheon ·

        In reply to What’s important in a computer?

        Regarding the Samba password issue: Try entering “man smbpasswd” at the shell to get information on how to use smbpasswd to add samba passwords for user accounts.

      • #3095503

        What’s important in a computer?

        by firstaborean ·

        In reply to What’s important in a computer?

        This topic fits right into my attitudes.  What I think is most important is true customization of configuration with full standardization of hardware and software.

        I’m a writer.  When I wished to procure a new computer to use at home, I quickly discovered that such as I wished was not available in stores or from the likes of Dell and Gateway, so I learned what I needed to learn and built my own.  In order to get software to do things in exactly my own way, I ended up with three word processors running in two of the three operating systems I installed on it, and some hardware very much up-to-date and some not.  One drive comes from around 1983 and gives me compatibility with my oldest data media.  By going the home-brew route I got exactly what I wished to have.

        It’s not only hardware.  For instance, I can work in what I call “job orientation,” where, instead of a mouse click loading one data file into an application, I click on one icon which customizes that application for that job, then loads all the job into windows within that application with cursors in the same positions as when I’d previously worked on that job.  This works easily with Microsoft Word for DOS, but not at all with Winword.  I use both, and I wish it were possible with Winword.

        I do wish computer makers would stop making their machines with proprietary hardware or versions of the operating systems.  It causes instability sometimes, and it gives one another reason for going the home-brew route.

      • #3095355

        What’s important in a computer?

        by michael_orton9 ·

        In reply to What’s important in a computer?

        What’s important?

        Well, partitioning is not an issue in 98SE, XP, or Linux, as I have a copy of Partition Magic 8. FDISK? Forget it!

        Security is an issue, and this depends on what sites I am accessing. If it’s a very dodgy hacking site, I remove the main HD (it’s in a caddy) and stick in a 1 gig HD (in the second caddy) that has a 750 meg FAT32 partition (to save anything I want) and a 250 meg Linux swap.

        I then stick in a DSL Linux live CD with a few extra tools (mainly strike-back software) and boot up. Then I have a very safe system with the Firefox browser, and I can visit the most hazardous web site with no problems. If the scrap 1 gig HD is infected, I just use Norton’s and PM to recover and reformat it. But for normal use on normal sites XP is fairly safe and SuSE Linux 9.3 Pro is a bit better, though to be fair nothing is really safe other than the DSL setup above (I always boot from a CD-ROM rather than a CD rewriter).

        As to software, it just depends on what the task is. My XP systems have loads of software on them, virtually anything anybody could want. With SuSE I can download about anything I want too.

        If the PC gets too slow I get another m/b and processor and then move the old one down to one of the less good PCs, etc. (I have 5 at present), and so on. Unless it burns out I don’t throw it away, I just hand it down to the next best PC. Win98SE and Linux are no problem, but XP needs re-registering, which is a pain and another good reason to switch to Linux. SuSE doesn’t seem to mind even if you switch the processor and m/b from Intel to AMD.

        So if you don’t find your PC does what you want it to do, it’s up to you to modify it until it does. If the firm has a policy of MS only, you may find that you have great problems. Even when I was working on the 5th largest IT program in the EEC (alleged by my old firm!) it was impossible to do some of our vital tasks without installing “illegal” software or add-ons. We just got more proficient at hiding such software!

        Modern PCs are so quick and cheap to upgrade, even over the lunch hour for stuffing in extra RAM and a better video card, that nobody should put up with a sub-standard PC.

        Another processor and m/b should be up and running (Linux/98SE, NOT XP) in under two hours.

        Mike Orton

      • #3094862

        What’s important in a computer?

        by sable.eminence ·

        In reply to What’s important in a computer?

        It may not occur to you when you shop for the ultimate gaming rig, but power efficiency is a long-overlooked aspect of computing that ought to get more attention. In an age where the market for computers is consumer-driven, we are seeing power consumption being forgotten in the name of added horsepower and features. I am typing this from a dual Athlon MP workstation, and I can attest that power consumption makes me wish for a sufficiently powerful, not necessarily workstation-class laptop that I can leave on for as long as I like, yet still be able to work and play on. Even with an economical TFT monitor, the operational cost per month still dents my budget; managing a network full of computers that ought to be more efficient makes me cringe.

        Today’s hardware seems to be defined more by “want” than “need”; everything I do save for work with virtual machine environments and video encoding can be done by a humble Celeron 400A running Windows XP, yet one look at off-the-shelf computers will show you over-specced components. It’s hard to find a humble discrete graphics card suitable for office/2D machines such as the venerable ATi Rage Pro PCI; integrated graphics bring their own sets of problems and in my opinion should be avoided, but one no longer has a choice as these cards have been discontinued. Sure, the graphics core contributes but a fraction of the total system draw, but raindrops make a puddle and these things add up.

        Intel’s Pentium-M series is a step in the right direction – compute and power efficiency in lieu of brute force. It sounds strange coming from an AMD fan and power user, but computers shouldn’t cost an arm and a leg to enjoy.

      • #3094741

        What’s important in a computer?

        by donaldcoe ·

        In reply to What’s important in a computer?

        Answering this ultimate question can be both difficult and as easy as using the YB Yellow.

        Keywords and Phrases: Versatility, Salesman Trust, Pre-mission Planning, Detailed Knowledge of the Terminology and Shortwords, and Dollars, Euros, Pesos, and/or your Coin of the Realm

        I have devoted more than half my adult years to the Information Technologies arena, and trust me when I say there is NO quick-and-dirty answer without some pain and bloodletting (in other words, forcing the old fluid through the old noggin with some heavy research and goal-planning). Successful gamblers learn by losing at first and then by making the other guy lose more. A phrase I picked up while in the military, and now carry with me as my guiding light, is the less graphic version for our ear-sensitive public, called the 4 P’s rule: “Poor Planning leads to Poor Performance”.  I have owned, sold, and built some 60 PCs since 1992, learning the hard lesson of how manufacturers tend to cut corners to maximize the difference between production cost and profit in order to survive in the business playpen. This experience forced me to learn to build my own PC performers, so the only one to blame for the corners cut is myself.  These are my insights from experience:

        If you are a family man with wife and kids, the minimum quantity of PCs owned is 2 (yours and theirs);

        Trust NO ONE but GOD (in person) with ACCESS to your PC;

        If multiple gamers reside in your household, every gamer has to have his or her own;

        MAX the System Memory and MAX the Hard Drive capacity when you buy it

        NO On-board (built-in) Graphics cards will DO for any rig;

        Damn the Torpedoes with 500 watts or more from the Power Supply;

        If it (the PC) is 3 years old or more, move on, don’t just upgrade – “Stop Dragging the Box of Rocks”.

        The successful end of this process will break your wallet, but I guarantee ultimate happiness without the Health Harming Stress.

      • #3095897

        What’s important in a computer?

        by jmgarvin ·

        In reply to What’s important in a computer?

        My wish list:

        Operating System:

        1) Built in security.  I want my OS to come with a firewall (somewhat pre-configured), a root or admin account that is accessed only via something like su, and services turned OFF

        2) Ease of use.  I want my OS to be intuitive and map to the meat world as much as possible

        3) More modularity.  Don’t make me go down a path I don’t want to, give me modules!

        Applications as part of the OS:

        1) Built in Office suite of any kind.  I need a decent word processor, spreadsheet, and simple data base program

        2) Built in CD/DVD ripping and burning software.  With the ubiquity of CDR/Ws and DVDRs, why not build this stuff in?

        3) Native support for various removable devices.  NOTE TO THE INDUSTRY: USB stuff is SUPPOSED to be generic…I shouldn’t have to use one OS or another to use your product.  The point of USB is that I can just plug it in and pull or put data on the device…bunch of knobs.

        Gaming:

        1) For the love of god, how about some gaming support for gaming hardware?  Things like joysticks and high end mice!

        Hardware:

        1) Drivers, drivers, drivers.  Any company that does not support Mac and/or *nix, needs to get with the times. 

        2) On that note, how about writing good drivers to begin with? 

         

        Final thoughts:

        For the most part the industry is better than it was 10 years ago…but we’ve also kept some of the same mentality.  It bothers me that we have things like USB, but there are still OS specific USB devices (it doesn’t make sense).  I’d also like to see the industry move forward with portable software.  The OS SHOULDN’T MATTER, but it does.  Software should be able to run anywhere, natively.  We’ve gotten lazy.

      • #3095200

        What’s important in a computer?

        by rpinson ·

        In reply to What’s important in a computer?

    • #3096259

      brevity in code

      by apotheon ·

      In reply to bITs and blogs

      I’ve decided, due to some discussions here and on programming
      mailing lists, and as a follow-up to my own recent Soapbox post about
      programming elegance, to post some examples of Hello World programs in
      various languages with very brief analyses.

      C:

      #include <stdio.h>
      main()
      {
          printf ("Hello World!\n");
      }

      This program contains two or three lines of code, depending on whether
      you count the include statement: I do, so I’d call it three. I’m going
      to be kind to languages that require multiple lines and use braces to
      enclose blocks of code, so I won’t count the closing braces in this
      example (or in other examples, like Java). More important than the
      lines of code, though, is the number of syntactic elements used to
      achieve your aim: in this program, depending on how you define discrete
      syntactic elements, there are three to five syntactic elements. I’d
      call it four.

      Java:

      class HelloWorldApp {
          public static void main(String[] args) {
              System.out.println("Hello World!");
          }
      }

      This program contains three lines of code. In this program, depending
      on how you define discrete syntactic elements, there’s anywhere between
      eight and twelve syntactic elements here.

      Perl, Python, and Ruby: I’m going to include all three of these in
      one explanation, because the three are amazingly similar when you’re
      writing a Hello World, even though what’s going on behind the scenes is
      quite different for the three.

      Perl:

      print "Hello World!\n";

      Python:

      print "Hello World!\n"

      Ruby:

      puts "Hello World!"

      Each of these languages provides a simple, one-line program as a Hello
      World, with exactly two syntactic elements — no more, and no less.
      Perl requires semicolons as line terminators, while in Python you can’t
      use semicolons as line terminators, and in Ruby you can use them or not
      at your leisure. Both Perl and Python require a newline escape
      character (the “\n”) to insert a new line after the “Hello World!”
      text, while Ruby has both a “print” method that works the same way and
      a “puts” method that automatically adds a new line at the end of the
      string (“puts” means “put string”).

      Visual Basic .NET:

      Imports System
      Public Module modmain
         Sub Main()
           Console.WriteLine ("Hello World!")
         End Sub
      End Module

      This program contains either five or six lines of code, depending on
      whether you include the Imports statement. I do, making it six. As for
      discrete syntactic elements, this thing has between nine and fourteen
      syntactic elements, depending on your definitions. I call it about
      eleven to thirteen, depending on my mood on a given day — after all,
      I’m a hacker, not a linguist. C#.NET looks similar, but actually
      accomplishes the same with four lines of code and about ten syntactic
      elements, making it an easier language to use as well as being a
      technically better language by far than VB.NET, despite all the “VB is
      easy!” marketing hype to the contrary. The only thing easier about VB
      is this: it’s easier to write bad code, because its OOP features are
      crap, and its structure is line-oriented.

      • #3096057

        brevity in code

        by wayne m. ·

        In reply to brevity in code

        Just to be fair, let’s look at what the extra syntactical elements bring: structure.

        The C, Java, and VB.Net code samples implement not only an operation, but also segment the code into a method and a module/class.  The argument in favor of these extra structural elements is in the support of larger coding problems, and thus is not evident in this example.

        There are two arguments in favor of applying a method structure to code segments.  One, breaking up the code allows it to be separated into comprehensible sections for development and maintenance.  Two, by creating a higher unit of code, it allows methods to be called from multiple locations in a program.  Presumably, this leads to a reduction in the overall number of code lines and syntactic elements.

        It is interesting to note that all of the above examples make use of this code structuring mechanism to allow the display of “Hello World” to be done in a single line of code.  The amount of underlying code shows the true power and worth of code structure.

        Definitions of elegance are personal and subjective, but for many they involve the appropriate structuring of code into ever higher levels of abstraction.  This structure is explicit, thus it requires some overhead, and it is certainly a valid question as to when the value of structure outweighs the cost.
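
        A minimal Ruby sketch of that trade-off (the log method is invented for illustration): the structure costs one method definition, and in exchange every call site shrinks to a single line that cannot drift out of sync with the others.

        # Without the structure: the formatting logic is restated at each call site.
        puts "[#{Time.now}] backup started"
        puts "[#{Time.now}] backup finished"

        # With the structure: one method carries the logic; callers stay one line each.
        def log(message)
          puts "[#{Time.now}] #{message}"
        end

        log "backup started"
        log "backup finished"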

         

      • #3096027

        brevity in code

        by apotheon ·

        In reply to brevity in code

        Wrong, Wayne. Perhaps you’re not aware of this, but to take Ruby as an example, “puts” is a method. The omission of self. before the method name is essentially syntactic sugar, contributing to the elegance of implementation by way of implicit structure where such implicit structure follows the principle of least surprise.

        In fact, Ruby is more object oriented than Java and VB.NET put together, let alone either one alone. Part of the difference that allows the omission of line after line of Java-style scaffolding is that Ruby tends to avoid forcing the tedium of repetitious, unnecessary busywork, and part of the difference that allows the omission of line after line of VB-style program logic is the fact that Ruby isn’t a line-oriented programming language shoehorned into an OOP role.
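
        A quick way to see that for yourself, using nothing beyond stock Ruby:

        puts "Hello World!"                  # implicit receiver: still a method call
        Kernel.puts "Hello World!"           # puts is a module function on Kernel
        method(:puts).call("Hello World!")   # or look it up and call it like any Method object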

      • #3095314

        brevity in code

        by charliespencer ·

        In reply to brevity in code

        I must be the only guy in the field whose first program was gas mileage (prompt for input of miles traveled and gallons used, calculate and display gas mileage), not “Hello World”.

      • #3094880

        brevity in code

        by aurelije ·

        In reply to brevity in code

        “Both Perl and Python require a newline escape character (the “\n”) to insert a new line after the “Hello World!” text”

        I don’t know about Perl, but in Python a newline escape character is not required. Also, a string can be written more simply as ‘some string’, so your example could be:
        print 'Hello World!'
        which is one keystroke fewer than Ruby 🙂

      • #3094698

        brevity in code

        by flip-flop-flam ·

        In reply to brevity in code

        You missed one of the most elegant languages out there, Haskell:

        main = putStrLn "Hello World"

        Simple and straightforward 🙂

      • #3094940

        brevity in code

        by akhasha ·

        In reply to brevity in code

        In Python, the print function supplies its own end of line implicitly, unless the last ‘argument’ is a comma. Python also allows the use of semicolons to put more than one statement on a line, but this is not recommended as it usually makes code harder to read.

      • #3079601

        brevity in code

        by jaqui ·

        In reply to brevity in code

        a couple more examples:

        PHP:

        <?
        echo "Hello World";
        ?>

        C++:

        #include <iostream>
        using namespace std;

        int main()
        {
            std::cout << "Hello World\n";
            return 0;
        }

        The std:: could be omitted from the cout for brevity, since the std namespace is imported above, but it’s a better habit to use the namespace every time, so that when you have several functions, some of which use their own namespaces, you can easily tell which namespace a specific call belongs to without having to search the code to find out.

        And, for those who don’t know, in C++ every function must declare a return type, so a definition like the one in apotheon’s C example is not valid C++: it would amount to a void main, and that is an illegal construct.

    • #3094812

      cuppa joe: Java In Context

      by apotheon ·

      In reply to bITs and blogs

      In my recent essay here entitled Elegance, I made a number of statements about what constitutes elegance in programming. Because I wanted to make these statements accessible, I used examples. Java became the subject of much exemplifying. Predictably, considering the number of Java programmers and proponents out there, I started getting a lot of reaction to my comparisons of Java language features to those of other languages.

      I’m actually just a touch surprised by the common assumption that I was making a categorical denouncement of Java. On the contrary: there’s a lot that Java does right, and a lot of what I said that others are perceiving as pejorative is actually indicative of some positive regard for Java.

      Given my druthers, I write code in languages other than Java. I don’t tend to work on large, organized, long-lived software development projects. I’m not part of a corporate code factory writing “enterprise” applications, so the benefits of Java don’t much apply to me, and in any case I don’t find Java source code to be particularly fun to work with. I enjoy the sort of challenges that arise in programming: that enjoyment can be helped or hindered by a number of factors, and I find that (for me, at least) Java brings more hindrance than help. The painful tedium of the repetitive scaffolding generation required to write a Java application, for instance, gets old pretty quickly. Just looking at a hundred-line Java program’s source can put me to sleep.

      That doesn’t mean the Java language is all bad. It provides some distinct benefits when working with a number of other programmers in circumstances where you don’t get to choose your fellow programmers on the team, for instance: much as I love Perl for lone code hacking, it’s not the language of choice for that situation by any means. Perl does provide far greater opportunity for elegant program design, but it also provides far greater opportunity for inelegant program design. Sometimes, you’re better off limiting your potential for poor quality rather than maximizing your potential for good quality.

      There’s no law that says that Java’s suitability for limiting the damage mediocre programmers can do requires that all Java programmers be mediocre. There are a great many quite excellent programmers in the world. In fact, the authors of The Pragmatic Programmer, one of the best books on programming practice ever written, use Java code quite extensively to illustrate their points in that book. They’ve written several other books focusing on Java, as well. I think they have more books related to Java than to any other two languages put together. Clearly, Java has its uses, and the Pragmatic Programmers prove that Java isn’t solely the tool of mediocre programmers.

      One of the biggest millstones around Java’s neck, actually, is its implementation. The JVM implementation is broken: it combines the worst aspects of both compiled and interpreted languages (as opposed to the Perl way of handling it, with a hybrid compile/interpret-at-runtime implementation that works quite well), all in the name of Java’s mythic portability. Unfortunately, as the scores of “write once, run nowhere” jokes attest, it fails in its goals rather too often.

      Java is certainly not suited to “scripting” use. It’s far too cumbersome as a language for that. If a good, solid compiler for Java were developed, though — something that would put it on a level with C++ — it would suddenly become a far more effective language in practice. In fact, it would begin to be much of what it already claims to be: an excellent application programming language. From a certain perspective, it already is an excellent application programming language in theory, but its implementation completely screws that up in practice.

      Another one of the big millstones around Java’s neck is the fact that it’s billed, and used, as a panacea. My biggest gripe in this area is its use as a server-side web programming language. Whatever idiot first thought a language run in a bloated, clunky, slow, flaky VM optimized for portability should be used as a server-side technology was clearly sprinkling PCP in his breakfast cereal.

      In short, Java has its uses, just as C, Perl, and Fortran have their uses. Just as I’d never recommend Fortran for lightweight system administration, though, Java should be used where it’s best suited to be used, rather than simply everywhere. It has its strengths, but it also has its weaknesses. Don’t forget that as you rush to join the knee-jerk reactions to my use of Java as an example of the differences in language design.

      • #3079562

        cuppa joe: Java In Context

        by debuggist ·

        In reply to cuppa joe: Java In Context

        Mythic portability? Really? I’ve written Java code, compiled it on
        Windows and ran it in a servlet container or as a standalone
        application on Windows. Then I ran the same code on Solaris and Red Hat
        Linux. Or are you just stating the “anywhere” part of “write once, run
        anywhere” is a stretch?

        You should do some research into the performance characteristics of the
        current VM; it has come a long way since 1996. Or if you have, please
        share it. I don’t keep such links handy for quick posting, but I know
        the current VM performs better than you’ve described it.

        As for “scripting” use, I’ve seen it used that way in JSPs. There are
        not enough adjectives to describe how ugly that is. People get caught
        up in what they can do in a programming language rather than what they
        should do.

      • #3078646

        cuppa joe: Java In Context

        by jaqui ·

        In reply to cuppa joe: Java In Context

        Doug,
        then you actually went and fought to get the same java vm installed on the unix system?

        the biggest portability issue with java is the number of semi-compatible vms for every os but windows.

        what vm did this programming team use? am I going to have to install a 5th java vm to use it?
        [ both questions are what has to be asked when using any java program on any os outside of windows ]

        until I stopped using java based programs, I frequently had to have 6
        or 7 different java vms installed so that each program would work
        correctly. this is not conducive to a favorable impression of any
        java based program.

      • #3078520

        cuppa joe: Java In Context

        by debuggist ·

        In reply to cuppa joe: Java In Context

        Jaqui,

        We have had very different experiences with Java, because I have not
        had any of the trouble that you are describing. Most likely it’s due to
        differences in our roles.

        With whom has anyone had to fight to get the same VM installed on a UNIX system?

        Regarding “the biggest portability issue with java
        is the number of semi-compatible vms for every os but windows,” can
        you give me some references where I can learn more about this?

      • #3078508

        cuppa joe: Java In Context

        by dwrandolph ·

        In reply to cuppa joe: Java In Context

        Mythic portability? darn right!

        Of the corporate tools I have to use at work, some are web / Java.
        Each one REQUIRES a specific JVM, not all of which can coexist on one
        machine. I have 1.3.1_08, 1.4.1_04, 1.4.2_08 (this app breaks if JRE
        1.4.2_10 is installed), 1.4.2_10 (this app I have to run from another
        machine to avoid 1.4.2_08 conflicts), and 1.5.0_06. There were patches
        to 1.3.1, but again that application requires a specific version; I
        had to uninstall the update and let the app reinstall the one it
        wanted.

      • #3079246

        cuppa joe: Java In Context

        by jaqui ·

        In reply to cuppa joe: Java In Context

        Doug,

        the battles I’m referring to are making sure the correct vm is loaded
        before starting the java application, since the wrong vm will make the
        application crash.

        with linux it’s simple to install all of them; the hard part is keeping
        track of which vm is required for which application, and making sure
        it’s the one that loads (unloading any others that may be running)
        when you go to start the application.

        way too much of a pita. I’ll just stick with proper executables that do not require a vm.

      • #3099214

        cuppa joe: Java In Context

        by debuggist ·

        In reply to cuppa joe: Java In Context

        Jaqui,

        You state valid issues, and I’ve seen this sort of thing before but not exclusively with Java.

        All too often the development of software is of primary importance;
        however, organizations fall down on how to handle it once it’s live.
        Deployments become a hassle, and that makes the software itself look
        bad.

        I would wager that if Sun and other Java software vendors made
        deployments simpler, your opinion of Java would greatly improve. Rather
        than improving and adding language features, perhaps Sun (and others)
        should make it easier to deploy Java software.

        Developers usually don’t feel the pain of this side of things, so
        perhaps the Java Community Process needs more input from operations
        staff like yourself.

      • #3099152

        cuppa joe: Java In Context

        by apotheon ·

        In reply to cuppa joe: Java In Context

        mythic portability and VM compatibility

        The problem, here, is in the claims of portability. The language itself has very little to do with the portability of code written in that language, once you get into the realm at least as high-level as C. The real determining factors are the programmer writing the code, the functions, objects, methods, libraries, et cetera that are available and used, and the implementation. What this means in the case of a language with the prodigious availability of libraries and the like that Java has is that claims of the language’s portability can be pinned pretty much entirely on the implementation, in this case the virtual machine.

        Since Java bytecode has nowhere near the portability across VM implementations (as Jaqui pointed out) that Sun and its other proponents claim for it, Java’s portability is indeed “mythic” in several senses of the term. If you want truly portable code, check out the ubiquitous “just works” behavior of C/C++ and Perl implementations across multiple platforms.

      • #3098613

        cuppa joe: Java In Context

        by debuggist ·

        In reply to cuppa joe: Java In Context

        Our definitions of portability for Java most likely differ due to the
        way we use it or how long we have been using it. My understanding is
        that Sun’s original interpretation of portability was in regard to OS.
        This would make sense when only one version of the VM existed. I still
        see this understanding among others when they compare Java and .NET.
        And I’ll concede that Java is not 100% portable in this regard.

        Another interpretation is in regard to different versions of the VM,
        which is what you and Jaqui state. That’s a valid one and
        understandable, now that Java is 10 years old with a multitude of VMs.
        In that sense, portability may become less mythic but not altogether
        100% reliable over time. I’m not a member or a regular observer of the
        JCP, but it’s definitely worthwhile that they make this a priority.

    • #3098629

      reply intelligently to email

      by apotheon ·

      In reply to bITs and blogs

      This is a brief bit of instruction in how to reply
      intelligently to email. It is inspired by Tech Juggler’s
      somewhat humorous How to reply to an email
      post, in which he carefully addresses how one can reply to an email
      using the Yahoo! webmail interface. While I don’t tend to have the
      problem he has, with people simply not replying (maybe I’m just more
      likable), I do occasionally see problems with people replying
      stupidly. That being the case, I have decided to give you guys some
      instruction so you won’t come off as a complete idjit the next time you
      reply to an email. Here are ten things to do to reply intelligently to
      email:

      1. Don’t top-post. Yes, it’s true, these days most mail clients for
      Windows and most webmail service interfaces place your cursor at the
      top of the quoted text to which you’re going to reply, but it is worth
      your while to go the extra step of clicking your little mouse pointer
      at the bottom of the message to add text in the correct place. The
      reason you shouldn’t top-post is simple: top-posting destroys logical
      flow of conversation. It makes sense, at times, to add some kind of
      editorial note at the top of the email about what you’re going to be
      saying, but for the actual reply text it makes far more sense to add it
      after what the other person said. It is acceptable at times to do
      inline replies, where you break up the other person’s text into
      sections with replies to each of them, but in this case it becomes even more critical for you to post your replies to each of these sections after
      what was previously said, rather than before. There are actually
      mailing lists and newsgroups out there where top-posting can get you
      banned: it is that annoying. This rule ties in with the next, about
      cropping text. For a demonstration of this principle in action, see
      this example:
      A: It reverses the normal flow of conversation.
      Q: What’s wrong with top-posting?
      A: Top-posting.
      Q: What’s the biggest scourge on plain text email discussions?

      2. Crop text of messages to which you’re replying. Only keep quoted
      text from a previous email in your reply if it is relevant. Nobody
      likes to reread, for the sixteenth time, the thirty paragraphs of
      preceding conversation when it doesn’t even pertain to what you’re
      saying in your reply (or even to have to scroll past it for the
      sixteenth time). Save bandwidth, annoyance, time, and reasons for
      people to decide you’re an aggravating nimrod. This is especially
      important when your reply consists of something like “I agree.”

      3. Send plain text emails only, unless you absolutely positively
      must send an email with other content; this goes double on mailing lists.
      People who use text-based email clients, like me, don’t like getting
      spaghetti HTML code in their inboxes. Also, you look less like a
      spammer if you don’t send HTML emails when you don’t have to.
      Furthermore, you’re less likely to spread email viruses and the like if
      you only send text-based emails (and you’re less likely to get infected
      by them if you view all emails as plain text without markup
      interpretation). When all you need to do is get a collection of words
      from point A to point B, there’s little point in cluttering it up with
      a bunch of nasty markup, and there are a lot of reasons to avoid it.
      I’ve heard people complain about techies giving them guff about things
      like HTML emails and other utterly gratuitous misuses of misfeatures,
      but it strikes me as odd that people never pause to consider that,
      being techies, they probably know something you don’t.

      4. When responding to email, and you get angry with what has been
      said, take a moment to make sure you aren’t misinterpreting what the
      other person said. Read the email a second, and maybe third, time:
      while doing so, do your level best to interpret the phrasing in a
      positive, rather than negative, manner. Don’t make an ass of yourself
      by jumping to conclusions that simply aren’t accurate. Tone doesn’t
      carry as well online as in person, usually, so it makes sense to be
      more sure of your interpretation before flying off the handle at
      someone over some perceived insult. It is permissible, and even smart,
      to ask if someone means to be insulting most of the time, in case you
      aren’t sure. It is just downright stupid to always assume the worst
      without confirmation.

      5. When asking for help, be polite, gracious, and humble. Humility
      isn’t always necessary, of course, but it serves as a good substitute
      for actually knowing the limitations of your own knowledge, and most
      people aren’t so good at recognizing those limitations. So: be humble.
      Too often, I see someone new to a mailing list relating to some
      technical matter show up and start complaining about what’s wrong with
      such-and-such, based on his experience with something almost unrelated,
      because this new thing doesn’t act the way he expects it to from that
      unrelated experience. This happens with people moving to Ruby from
      Python or Java, people moving to Linux from Windows, and so on. Realize
      you’re in new territory, and don’t (yet) know everything. Act
      accordingly. Realize that others are not likely to respond positively
      to you if you don’t take this very reasonable approach. You should also
      be polite, gracious, and humble even when excoriating some troll for
      his imbecilities, and perhaps enjoy using words the dimwitted,
      unregenerate nematode probably doesn’t understand.

      6. Use a spell-checker. You should probably use a grammar-checker,
      too. I don’t use either, but I’m a spelling and grammar guru with the
      English language, so I’m the exception. You, dear readers, are likely
      to be the rule. I recommend finding better spelling and grammar
      checking applications than Microsoft Word, as I see it make mistakes
      all the time. You should also proofread your email by eye to ensure the
      spell-checker and/or grammar-checker didn’t miss something, and to
      ensure you didn’t do something stupid like accidentally attribute the
      “Reply to an email” post to master3bs instead of Tech Juggler (What?
      Me? No, I wouldn’t do that!).

      7. Use the reply button when actually responding to a previous
      email. People using threaded email clients to help keep discussions
      organized like threads to remain intact so they (we) have a better
      sense of context. This is especially important when replying where your
      email will be read by people who receive lots and lots of email every
      day (like me). Pure chronological organization of email may be fine for
      Grandma, but for someone like me subscribed to a dozen tech-related
      mailing lists, some of which get more than a hundred messages per day,
      threaded email is of critical importance in being able to find and
      understand anything going on. These tools exist to make it easier to
      parse large quantities of data very quickly. Try to avoid screwing it
      up for us.

      8. Don’t use the reply button when you’re starting a whole new
      discussion. Why the heck would I want to see a thread about how to
      configure Gaim to use the system speaker instead of defaulting to the
      sound card crop up several layers deep in the middle of a thread about
      electronic voting machine vendors having to share their source code
      with state governments? I’ll give you a hint: I wouldn’t. That’s
      especially true if I decide I’m done with the voting machine thread,
      but wanted to read the Gaim thread, and hit Ctrl+D in mutt (my email
      client of choice). That deletes the entire thread, including your Gaim
      subthread. In short, don’t screw with email threading: reply when
      you’re actually replying, start an email from scratch when you’re
      starting a whole new conversation. Capisce?

      9. There’s a well-known rule in some circles, stating that email
      signature blocks should be no more than four lines long, at no more
      than 80 columns (characters) of width. There’s a very good reason for
      this: we don’t want to spend more time reading your signature than your
      email body text. Most people’s signatures aren’t nearly as clever as
      they think. The exception, of course, is when corporate-mandated
      disclaimers have to be attached to the ends of your emails, but that
      doesn’t mean you should create your own disclaimers that are longer
      than four lines at 80 characters. Don’t be a completely gratuitous
      windbag, please. Now, the four lines rule is realistically a guideline
      more than a rule, in most instances, but there are those who are more
      strict about it than others, so plan accordingly. I find that it makes
      sense to limit my signature blocks to no more than four lines at about
      70 or 75 columns’ width, in case someone’s email client includes my
      signature in quoted text in a reply so that it ends up getting shifted
      to the right a little bit. The reason for the 80 columns’ width, of
      course, is traditionally to accommodate those who are working on
      old-school 80 column displays. Be kind to your email-reading brethren
      of all screen resolutions.

      10. Use hard line-wraps at less than 80 characters in your emails. I
      recommend something between 70 and 75 characters, to allow for padding
      out when others quote your text in replies. It makes it easier to read
      a plain text email when it is limited to about 80 columns’ width than
      when it runs all the way to the right-hand edge of the screen or, worse
      yet, past it (if someone is using a mailreader that doesn’t wrap lines
      dynamically).
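
      If you happen to compose or process outgoing mail programmatically,
      hard-wrapping is trivial to automate. Here’s a rough sketch using
      Python’s standard textwrap module (72 columns is just one arbitrary
      pick from the range I recommended above):
      import textwrap
      body = 'one long unwrapped paragraph of email text ...'
      print textwrap.fill(body, width=72)   # hard-wraps the text at 72 columns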

      Bonus. Here’s your eleventh piece of advice for replying
      intelligently to email: Don’t attach files, especially big files, if
      they aren’t necessary, or forward crap from spammers, or otherwise send
      asinine content nobody wants to read. Seriously. Why would I want 47MB
      of attachments and spam in my inbox because you decided to forward a
      bunch of useless crap to me? Here’s a hint: I wouldn’t.

      So, once you get the hang of the basic functionality of creating and
      sending a reply, as instructed by TR user Tech Juggler, you can start
      working on not coming off as a screaming imbecile when you do so. Thank
      you for your time, and have a nice day.

      • #3098520

        reply intelligently to email

        by master3bs ·

        In reply to reply intelligently to email

        This is an excellent list. I think TR should include
        this as a download.

         I was somewhat dismayed to learn that my email intelligence was around 90%.
        Unfortunately, I have been guilty of top posting. Generally, I
        consider myself to have good online etiquette, but I was never sure what to do
        about top posting. You will be glad to know that this blog has cured me
        of the habit.

        I passed your other guidelines with flying colors, with two caveats. I
        use Office to spell check my messages, and my corporate email signature
        is required to be several lines long.

        Again, this is a great list. I’m pleased to have inspired it.

      • #3098013

        reply intelligently to email

        by j alley ·

        In reply to reply intelligently to email

        Generally a good list but:

        1. You violated your own rule “be a completely gratuitous windbag” – but it was mostly amusing.

        2. I don’t agree completely with your rule about top-posting. I like top-posting when I am corresponding with one or two others and I mostly always delete any parts of the thread that are not relevant. In that context, I know what the thread was and I don’t want to scroll through it again.

        In all other cases it is important to go with the flow (as in all matters of etiquette). If you are mailing to a group where everyone top-posts then follow that approach. If you are the first to reply or if everyone else is doing it, then by all means bottom post. But above all else, take the time to think about the way you reply.

        Don’t post in-line unless you are in a situation where you are pretty sure no-one else is going to reply. Following an interleaved thread is almost impossible.


      • #3097945

        reply intelligently to email

        by apotheon ·

        In reply to reply intelligently to email

        Generally a good list
        Thanks!

        You violated your own rule ‘be a completely gratuitous windbag’ – but it was mostly amusing.
        Of course I did. Great fun all around.

        I don’t agree completely with your rule about top-posting.
        That’s only because you’re an uncultured heathen.

        I know what the thread was and I don’t want to scroll through it again.
        Then why don’t you just cut out the parts you don’t want to scroll through again?

        Don’t post in-line unless you are in a situation where you are pretty sure no-one else is going to reply. Following an interleaved thread is almost impossible.
        Odd; nobody seems to have such problems in any of the discussion threads I’ve ever run across, and in fact I have never heard of anyone complaining about such a thing before, while holy wars between Outlook Express-indoctrinated top-posters and “everyone else” bottom-posters seem to crop up semi-regularly. In fact, I’ve seen discussions that would have been almost entirely impossible to follow without interleaved posting, because responses were made to divergent points in the same email.

        Maybe that’s just me, though.

      • #3097937

        reply intelligently to email

        by master3bs ·

        In reply to <