
Open source versus proprietary: Does the debate really have to be endless?

The debate over which software purchasing/licensing protocol is better, open source or proprietary, continues to percolate in the TechRepublic discussion boards. We analyze the current debate with an eye toward establishing common ground and a rational perspective.

Over the past few months, the debate over which software purchasing/licensing protocol is better, open source or proprietary, has continued to percolate in the TechRepublic discussion boards. On one side of the debate, we have the camp that distrusts the motivations of the open source community and dismisses the viability of an economic model that does not revolve around the exchange of wealth. On the other side, we have the camp that distrusts the multinational conglomerate software giants and their preoccupation with earnings and control. Stuck in the middle are people like me, who have no technological agenda to champion but instead just want to get some work done using the best software they can get their hands on.

While the debate will probably never completely die, perhaps we can help turn the heat down enough to reach a slow simmer by analyzing the current debate with an eye toward establishing common ground and a rational perspective.

Foot in the door

The one significant change in this ongoing debate has been the increasing acceptance of open-source software as a viable alternative to traditional proprietary applications by the decision makers who control the enterprise purse strings. As I suggested in a recent discussion thread:

The question is no longer whether open source is viable, but whether IT professionals will install such applications in their respective organizations. This subtle change means that open source software is now competing on the basis of effectiveness, cost, reliability, security, and features rather than merely trying to get a foot in the enterprise door.

However, when it comes to Linux, not everyone is convinced it is ready for the end-user desktop. For example, TomSal argues:

Linux won't be on the desktops soon -- maybe at mondo huge companies OR companies where the nature of their industry means all their users are engineer types; but the average Joe small business shop, with users who need their helpdesk to figure out how to change a default printer or change a font in Word -- it's EXTREMELY unlikely that Linux will find a home there anytime soon.

Linux needs to be friendly to the "AOL" crowd first ... then we as IT folks can bring serious arguments to our CIOs, CFOs, and CEOs as to why Linux is a good switch for the corporate desktop.

I used to think along similar lines myself. I believed that Linux required the use of long command line strings with weird syntaxes and cryptic parameters. Point & Click Linux, which was recently featured on TechRepublic, changed my mind. The version of the MEPIS Linux distribution included on the CD accompanying the book is about as simple to install as it gets. The “Linux is too difficult to install” mantra just doesn’t hold water anymore. As apotheon observed:

I find that end users are even less likely to be able to install a Windows OS properly without help than certain very user-friendly Linux distributions. SuSE comes immediately to mind. Fedora and Progeny Debian both use the Anaconda graphical installer, which is more friendly and easier to use than the Windows installer. MEPIS is so incredibly easy to install on the hard drive that it beggars the imagination.

Even Debian (my personal favorite distro for a great many reasons), once used as the poster child for how difficult Linux is to install, has had its installation process revamped to the point where installing the base operating system consists of little more than accepting default options.

Anyone trying to make a case for sticking with Windows based on how easy the OS is to install has not been paying attention.

Summing it all up

But even if you accept the idea that Linux and open source software are ready for end-user desktops in many enterprises, you may still have to overcome other barriers to more widespread acceptance. Security issues surrounding the implementation of software are a major concern for any organization, especially if regulatory and legislative compliance is part of the equation. Enterprise decision makers need assurances that the software deployed within the organization is secure. This is the concern expressed in a TechRepublic discussion thread started by KaceyR:

In keeping up with the Open Source software movement, I've come across a single, basic flaw.

The only way to ensure that your executable is as it should be is to perform a comprehensive review of the source code and to recompile it yourself.

I can, very easily, set up a distribution web site that contains both the source code and compiled executables, complete with my own hooks in the executables that will do whatever I want them to. The typical user will download the executables, maybe even the source, but will never perform a compile, and I certainly won't have my hooks in the source that they can review.

Without a complete review of the source code and an independent compile yourself, you have absolutely no assurance that the code you are running matches the source code that it's supposed to. Should that code damage or otherwise compromise your system, what's your recourse? Rebuild your system.

In addition, if you have the time and intellect to review and completely understand the source code, why are you wasting your time downloading someone else's product when you can make your own with the same level of effort?

For example, let's say you download a copy of Firefox, and it's been tweaked with a hack that allows an external user into your system. You're browsing around the internet and everything is great, then one day you realize that you've lost all of your data. During a post-mortem, you discover that Firefox was the culprit, so you go after the developers at Mozilla. Oops! The signature of the executable doesn't match ANYTHING the original developers have ever released. They're not responsible. Time to rebuild your system.

Now let's say that you're running proprietary software and the same thing happens. During the post-mortem you discover the culprit is the ABC product from XYZ company. The file signatures are compared and, sure enough, they match. XYZ company is clearly responsible, so they will be inclined to assist you in determining the exact cause and fixing the problem, and you may (possibly) have legal recourse against XYZ company.

This is both a level of protection and a level of assurance that the program will perform as expected.

Companies today are very paranoid (and rightly so) about system intruders and industrial espionage. With this in mind, why would you turn to Open Source software? 

In a nutshell, this argument is probably the main reason more companies have not implemented open source software solutions en masse. Whether it is deserved or not, the perception has been that open source is not as robust as proprietary software—that, when it comes to support and accountability, there is no single entity whose proverbial feet you can hold to the fire. Backers of open source software have their work cut out for them when it comes to overcoming this lingering stigma.
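KaceyR's scenario ultimately hinges on one question: is the binary you are running the binary the project actually released? Whether the code is open or closed, that question is answered the same way in practice -- by comparing a cryptographic checksum or signature of the download against the value the publisher lists. The sketch below is illustrative only; the file name and the "published" hash are hypothetical placeholders supplied by the user, not real released artifacts.

    # A minimal sketch of the integrity check KaceyR describes: compare the
    # SHA-256 checksum of a downloaded installer against the checksum the
    # project publishes on its official site. The file path and expected hash
    # are placeholders passed on the command line, not real released values.
    import hashlib
    import sys

    def sha256_of(path, chunk_size=65536):
        """Return the hex SHA-256 digest of a file, read in chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            chunk = f.read(chunk_size)
            while chunk:
                digest.update(chunk)
                chunk = f.read(chunk_size)
        return digest.hexdigest()

    if __name__ == "__main__":
        downloaded_file = sys.argv[1]              # the installer you downloaded
        published_checksum = sys.argv[2].lower()   # the hash copied from the publisher's site

        actual = sha256_of(downloaded_file)
        if actual == published_checksum:
            print("Checksum matches the published value.")
        else:
            print("WARNING: checksum mismatch -- do not run this binary.")
            print("  expected:", published_checksum)
            print("  actual:  ", actual)

Of course, a matching checksum only proves the file is the one the publisher posted; it says nothing about whether the publisher itself deserves your trust, which is exactly the question bhoule takes up.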

Perhaps the best answer to the “is open source software secure” question posed by KaceyR was expressed by bhoule:

Several years ago, a long time commercial database product (Borland InterBase) was turned over to the Open Source community. Only after its public release was it discovered that an unpublished "back door" previously existed to bypass the security of both the DB and the OS it ran on.

How can you be sure that the same situation does not exist in other commercial products? Without access to the code, how will you ever know? With open source, at least the possibility exists that nothing similar will sneak by. But does this mean that everyone should eschew commercial software? Certainly not.

Open Source is not a cure-all, nor will commercial vendors ever drive the open code model away. The decision to implement one vs the other is a function of economics and trust. Do you trust the open source community more than you trust your vendor? Is the long-term TCO of open source greater or lesser than that of commercial programs? There is no right answer. Neither camp is 100% correct. It boils down to an individual's decision in specific situations.

But this concept of individual choice and responsibility brings to mind another trap that KaceyR falls into. The claim is that the only way to know if an open source product is safe is to personally perform a code review. While technically correct, the argument neglects the power of community and the monitoring and reporting that the collaborative open source environment provides. If open source lived in a vacuum, exploits -- theoretical or otherwise -- would have a field day. But the reality is that with millions of eyes in the open source community, such an exploit would not exist for long. One could be paranoid and demand a self-performed code review of every single open source app, but then I would expect such an individual to be testing their gasoline before filling their car's tank just to be sure it wasn't ethanol or diesel!

Lastly, KaceyR's legal liability hypothetical was a bit of an apples-to-oranges comparison. A checksum-validated commercial product was compared to a checksum-failed open source project. Let's turn that around and compare like items: a commercial vendor is no more likely to support a checksum-failed product than an open source provider is. As for the checksum-validated converse, while it may be technically true that legal recourse would be possible in the commercial situation, reality dictates that no-fault EULAs and the long legal process would render the point moot. Legal indemnification of commercial software serves mainly to bolster the trust-in-your-vendor argument.

Trust in whom you feel appropriate. It seems clear from the comments that KaceyR will live happily in the commercial world. Zealots, meanwhile, will run with Linux on their desk, Opie on their PDA, and Freevo on their DVR. The rest of us will continue to strike that balance of economics and trust.

Now what

The question of secure software is very compelling at first blush. Fear, uncertainty, and doubt are difficult obstacles to overcome, especially in organizations not culturally attuned to change. Tried-and-true solutions may cost more over time, but they are a known quantity, bringing with them the comfort of predictability. But when you get to the core of the matter, what makes software secure? How much assurance and trust can you have in anything software-related? Bugs, bad installs, and broken features are all just part of the experience for any IT professional in an enterprise setting. Things happen.

In the final analysis, it comes down to service and confidence. As an IT professional, when making any software purchasing decision, you must consider the quality of the service agreement (if there is one) and use it to assess your confidence in the overall viability of the software in question. This applies whether the software is proprietary, open source, or developed in-house.

As t_mehta put it in a discussion post:

  1. Trusting any software is a question of faith and testing -- can we trust any/all proprietary software to be absolutely perfect? Nope. The same is true on the OSS front -- no difference here.
  2. Will finding who's responsible for a coding problem in OSS or in MS or Borland etc. solve the problem on my plate now? Nope -- you've got to wait for someone to fix it, and who pays that guy is irrelevant -- no difference here.
  3. If you are a corporation with service support, commercial OSS support and proprietary support are equally good -- no difference here.
  4. If you choose to download and install software with no support identified in advance -- e.g., freeware on MS or Linux -- your support options are the same: NIL. If things go wrong, it's your funeral -- no difference here.
  5. Legal recourse is the same if you have the same support agreements in place -- no difference based on platform (OSS/proprietary).

Overall, the argument put forth is axiomatic: in simple words, it says that proper service agreements are useful to customers irrespective of the source/ownership of the software installed. That's like saying 2 + 2 = 4 -- we all know it, so what's the big deal? I don't get KaceyR's issue. Let's compare unsupported proprietary software with supported OSS software -- give me the supported software any day; who cares about its origins.

I say, based on the lifespan you are looking for, use whatever is functionally appropriate and cost-effective (over the lifecycle), IRRESPECTIVE of whether it belongs to the OSS world or the proprietary one -- period.

I think this is really what it comes down to: The sheer amount of code and the sophistication of features incorporated into application software mean that no program can ever be truly perfect. Due diligence will be required no matter how the software is acquired. The real debate between proprietary software and open source software lies not with the often politicized method of delivery, but with the features offered by the software itself. The really fruitful debate should be focused on the merits of the software, not what was paid for it and to whom.

About

Mark Kaelin is a CBS Interactive Senior Editor for TechRepublic. He is the host for the Microsoft Windows and Office blog, the Google in the Enterprise blog, the Five Apps blog and the Big Data Analytics blog.
