
Bolted-on security features aren't secure


Jaqui Greenlees, a software developer, consultant, and former highly active member of the TechRepublic community, has been known around these parts for making some provocative statements at times. He is critical of JavaScript as a security risk to a degree that makes me look like a JavaScript cheerleader (and I'm no friend of the way JavaScript is typically implemented in popular Web browsers) — so much so that he stopped posting to TechRepublic at all back when TR's interface started requiring JavaScript. He has also been a vocal critic of all things Microsoft, citing both heavy-handed business practices and security issues.

He's smart, opinionated, and bucks trends. He thinks about security a lot. Sometimes, he says something on the subject that catches me by surprise.

Now that you have the background, check out a weblog post of his, on his own website, titled "Microsoft Breaking the law again?"

Jaqui explains briefly how to verify for yourself, using Visual Studio, that Microsoft is harvesting information about MS Windows users and quite intentionally suppressing some of Microsoft's own security features in the process. The following quote from Jaqui's weblog post summarizes the problem:

That one section of the file logs in as administrator, if you are not, turns off warnings, collects data from your computer, sends that data to Microsoft, then turns warnings back on and logs off as administrator.

This quote explains the suggestion that Microsoft may be breaking the law (again):

Then decide, is Microsoft committing the same criminal act they were penalised for by the US Courts with the Windows 98 Update issue of sending information to themselves when you ran windows update in windows 98?

Of course, WGA/MGA is specifically designed to send information to Microsoft for validation purposes, and there's nothing particularly hidden about that fact. As is often the case, I find myself not necessarily in agreement with Jaqui's take on a security matter (such as my somewhat milder views of the problems with JavaScript). WGA/MGA, complete with its explicit disclosure of the fact that it is intended to send system information to the Mothership, seems to me to essentially be a free pass for exactly what Jaqui discovered in the LegitCheckControl.DLL file.

From the Genuine Microsoft Software FAQ:

Q: What information is collected from my computer?

A: The genuine validation process will collect information about your system to determine if your Microsoft software is genuine. The validation tools do not collect your name, address, e-mail address, or any other information that Microsoft will use to identify you or contact you. The tools collect such information as:

  • Computer make and model
  • Version information for the operating system and software using Genuine Advantage
  • Region and language setting
  • A unique number assigned to your computer by the tools (Globally Unique Identifier or GUID)
  • Product ID and product key
  • BIOS name, revision number, and revision date
  • Volume serial number
  • Office product key (if validating Office)

In addition to the configuration information above, status information such as the following is also transferred:

  • Whether the installation was successful
  • The result of the validation check

As standard procedure, your Internet Protocol (IP) address is temporarily logged when your computer connects to a genuine validation website or server. These logs are routinely deleted.
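The categories in that list are ordinary system metadata. As a rough illustration only, here is a minimal Python sketch that gathers comparable fields with the standard library; the field names and structure are my own invention for this example, not the actual WGA/MGA payload format, which is not public:

```python
import locale
import platform
import uuid

def collect_validation_payload():
    """Gather the same general categories of metadata the FAQ lists.

    Purely illustrative: the real validation tools' fields and wire
    format are not public, so everything here is a stand-in.
    """
    return {
        # OS version information (analogue of the FAQ's version items)
        "os": platform.system(),
        "os_version": platform.version(),
        # Machine architecture (loose analogue of make and model)
        "machine": platform.machine(),
        # Region and language setting
        "locale": locale.getlocale(),
        # A unique number assigned to the computer by the tool (GUID)
        "guid": str(uuid.uuid4()),
    }

payload = collect_validation_payload()
print(sorted(payload))  # ['guid', 'locale', 'machine', 'os', 'os_version']
```

Nothing in such a payload identifies a person directly, which is consistent with the FAQ's claims; the controversy is over how the collection is performed, not what is collected.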

On the other hand, the implementation of this WGA/MGA behavior leaves something to be desired:

  1. A tool that logs itself into an account with administrative access, then turns off the system's security warnings system, constitutes a tremendous potential security threat — even if the tool itself is not malicious. The potential for abuse is a touch disturbing to consider.
  2. It's also interesting to note that the behavior of WGA/MGA is something that MS Windows' own security features would consider a threat, necessitating this temporary deactivation of the warning system. This strikes me as an unintentional indictment of the entire process of validation in this manner, and digital rights management systems in general. They are, in effect, legitimized malware — and here's a demonstration of the whys and wherefores.
  3. The fact that this sort of behavior is even possible — not merely as an overlooked bug, but as an intended part of the design of Microsoft's security features — constitutes a security risk of its own. It also starts one thinking about whether this approach to producing security alerts and "protecting" the user could even be designed to disallow such security risks at all. In other words, it's a strong piece of evidence of a principle of security by which I've lived for years: Bolted-on security is not even as strong as the bolts. Call it "Perrin's principle of integrated security" if you like.

A security feature differs from a characteristic of a secure architecture in that the former is bolted on, while the latter is part of the entire design philosophy of the system. Virus scanners, carefully crafted warning systems that maintain definitions of "risky" behavior, and similar security measures are security features in this sense. Meanwhile, default system behavior by which files are opened or executed based on explicit instructions, rather than as determined by a three-letter filename extension, is a characteristic of a secure architecture.
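To make that distinction concrete, here is a small Python sketch (my own illustration, not anyone's production code) contrasting extension-based dispatch with dispatch based on an explicit instruction:

```python
def handler_by_extension(filename):
    """Insecure-architecture style: trust the three-letter extension.

    Renaming evil.exe to report.txt changes how the system treats it,
    and a deceptive double extension invites execution.
    """
    if filename.lower().endswith((".exe", ".bat", ".scr")):
        return "execute"
    return "open_as_document"

def handler_by_instruction(filename, user_requested_execution):
    """Secure-architecture style: nothing executes unless the user
    (or an explicit permission bit) says so, regardless of the name."""
    return "execute" if user_requested_execution else "open_as_document"

print(handler_by_extension("holiday_photos.jpg.exe"))           # execute
print(handler_by_instruction("holiday_photos.jpg.exe", False))  # open_as_document
```

In the first model, safety depends on a warning system flagging suspicious names after the fact; in the second, the default behavior never needed the warning in the first place.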

Note that there is a difference between "bolted-on" and "modular". I leave understanding the distinction as an exercise for the reader, and only mention it here as a cautionary statement for those who understand programming well enough that they might wonder if I have a problem with modularity in software design. I don't: quite the opposite, I tend to think most complex software systems are not modular enough, for purposes of security as well as other reasons. In fact, my next article will touch briefly on a benefit of modularity in authentication systems.

Suffice it to say, for now, that all else being equal a bolted-on component of a system can be circumvented more easily (and not necessarily by attacking it directly) than a modular component that becomes an integrated part of the whole when attached. On the other hand, a system can operate without a modular component (though it may need a replacement), whereas a bolted-on component may not be removable without crippling the entire system — or worse.

For many years, Microsoft Windows in all its evolving incarnations has edged closer to something it might call proper multiuser support. This is a good thing for MS Windows' overall security, and might one day provide real, effective privilege separation. It has taken a long time, though, in large part because MS Windows grew out of DOS — an intrinsically single-user (and single-tasking) system — and Microsoft has been loath to just throw out the entire system design to start over the way Apple did with Mac OS X. There is surely little, if any, original DOS code still extant in MS Windows Vista, but the requirements of backward compatibility and the slow evolution of MS Windows over the years have ensured that, to some extent, user authentication is still a bolted-on security feature — or welded-on, perhaps, after so many years of increased integration.

The fact that WGA/MGA can circumvent the standard authentication process to behave in a manner so reminiscent of malware is a pretty clear indicator of how far MS Windows authentication systems have to go before they become an integral part of the system architecture — and, thus, something I might call "secure".

About

Chad Perrin is an IT consultant, developer, and freelance professional writer. He holds both Microsoft and CompTIA certifications and is a graduate of two IT industry trade schools.
