Security

Security 101, Remedial Edition: Obscurity is not security

Chad Perrin reinforces his argument that obscurity is not security by defending open source security solutions against the claim that they are inherently more vulnerable.

I know I've addressed this security issue before -- many times, in fact. Apparently, it needs to be said again:

Obscurity is not security!

In TechRepublic's IT News Digest blog, Arun Radakrishnan wrote about Red Hat's decision to open the source to its security certificate system, in the article Does open sourcing security framework lead to more secure software?

In the article, he references not only Red Hat's announcement, but also a ZDNet post by Dana Blankenhorn, wherein Blankenhorn takes on what he calls the "open source security meta-hole". His comments imply that, just by making the source code for a piece of software available, that software's security is somehow compromised. He fails to actually make a case for that line of reasoning (probably because it's based entirely on assumptions, and not at all on any actual understanding of principles of security and software design). He does, however, link to an article in ZDNet UK that discusses the uninformed security concerns of Australian Taxation Office CIO Bill Gibson (not to be confused with speculative fiction author William Gibson), as well as the open source community's reactions to his expression of those concerns.

In a ZDNet Australia interview, Bill Gibson said:

We are very, very focused on security and privacy and the obligations that we have as an agency to ensure that we protect those rights of citizens' information in that respect. So, we've continued to have concerns about the security related aspects around open source products. We would probably need to make sure that we will be very comfortable -- through some form of technical scrutiny -- of what is inside such a product so that there was nothing unforeseen there.

In my experience, there are basically three types of people telling the world what to do to ensure their computing environments are secure:

  1. There are truly knowledgeable security experts such as Bruce Schneier and Phil Zimmerman, people who articulate security principles for the rest of us to help us understand how best to protect ourselves, and who develop legendary security solutions like the Blowfish cipher and PGP. These people universally understand one of the most basic, important principles of security -- Kerckhoffs' Principle -- which states that a cryptosystem should be secure even if everything about the system except its key is public knowledge. A reformulation known as Shannon's Maxim states:

    The enemy knows the system.

    The lesson to take from this is simple -- the effectiveness of your security policy should not depend on the secrecy of the policy, because it can always be discovered or reverse-engineered. These are the security experts who understand the value of peer review. They tend to understand that the benefits of security through visibility are far more important than any unwarranted fear of losing the obscurity of the system. (A short illustration in code follows this list.)

  2. There are those supposed security experts who, regardless of whether they understand Kerckhoffs' Principle, exhort others to use systems whose implementation details are kept secret. The justification is that this secrecy somehow reduces the likelihood of someone being able to crack the system by examining the implementation details. These are people who are typically either plagued by a conflict of interest (they want to sell closed-source software, but can't sell it if they're telling people their software would be safer if it were a popular open source project) or not nearly as knowledgeable as they thought.

  3. There are, finally, people who hear some security-unconscious CIO's uninformed statements in an interview and run with them, without bothering to actually read up on the subject at all.

It's harsh, but it's true.
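
To make the first group's point concrete, here is a minimal sketch (in Python, with illustrative names) of Shannon's Maxim in practice: the algorithm is a published standard the enemy is welcome to read, and the security rests entirely on a key that never appears in the code.

    # A minimal sketch, not production code: the algorithm (HMAC-SHA256) is a
    # published standard -- "the enemy knows the system" -- and the only secret
    # is the key.
    import hashlib
    import hmac
    import secrets

    key = secrets.token_bytes(32)          # the one and only secret

    def tag(message: bytes) -> bytes:
        # Anyone may read this code; without the key, forging a tag is infeasible.
        return hmac.new(key, message, hashlib.sha256).digest()

    def verify(message: bytes, candidate: bytes) -> bool:
        return hmac.compare_digest(tag(message), candidate)

    t = tag(b"transfer $100 to account 42")
    assert verify(b"transfer $100 to account 42", t)
    assert not verify(b"transfer $999 to account 13", t)

Publishing every line above costs nothing; publishing the 32-byte key costs everything -- which is exactly the division of labor Kerckhoffs' Principle prescribes.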

Educate yourself. Understand that hiding the implementation details of your security system doesn't help anyone but the "bad guys", because it prevents the "good guys" out there in the general public from helping you improve the system -- but the malicious security crackers will use the same reverse-engineering, vulnerability fuzzing, and stress-testing techniques they always use to find chinks in the armor. Only the most obvious security issues in an implementation (like a complete lack of input validation in a typical Web application) can be found very easily by looking at source code, and any errors that simplistic can be found in moments by way of other techniques.
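
As an aside, the "lack of input validation" case is easy to picture. In the hypothetical sketch below, the flawed query is obvious at a glance in the source -- but a black-box probe with a stray quote character finds it almost as quickly, which is the point: hiding the source never hid the flaw.

    # Hypothetical sketch of the "obvious in source, trivial to find anyway" flaw.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    db.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

    def lookup_vulnerable(name):
        # No input validation: string concatenation invites SQL injection.
        return db.execute(
            "SELECT secret FROM users WHERE name = '%s'" % name).fetchall()

    def lookup_safer(name):
        # Parameterized query: the driver handles quoting for us.
        return db.execute(
            "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

    print(lookup_vulnerable("x' OR '1'='1"))   # dumps every secret in the table
    print(lookup_safer("x' OR '1'='1"))        # returns nothing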

Not only does open source software provide for a development process more likely to result in secure software, but it also places security software like GnuPG, Nessus, ClamAV, OpenSSH, WinSCP, and PuTTY in the hands of people everywhere who might otherwise never use them. Open source software is near and dear to my heart, as a security professional interested in helping as many people as possible better protect themselves from the malicious security crackers (and unscrupulous, privacy-invading corporations) of the world. Because of that, I tend to get a little annoyed when people spread such nonsense as the notion that open source software is somehow inherently less secure.

Sure, maybe I'm biased, but in this case, it's because I value actual security over the mere illusion of it.

About

Chad Perrin is an IT consultant, developer, and freelance professional writer. He holds both Microsoft and CompTIA certifications and is a graduate of two IT industry trade schools.

Comments
sarah.kahler

Great article! Your comment that "the effectiveness of your security policy should not depend on the secrecy of the policy, because it can always be discovered or reverse-engineered" is interesting. The control standards, or rules by which individuals must comply with the security policy, may vary from company to company depending on the regulations for their industry. These regulations or authoritative sources may limit how much companies can enable "security through visibility".

CG IT

I'm not sure I agree with the open source vs closed source argument about security. In effect this is a Linux/Unix/open source OS vs Windows argument about which one is more secure. It is my belief that, since Windows runs on about 90% of the world's computers (and that's a heck of a #), hackers/crackers/criminals aren't going to target the 10% of the rest of the world that doesn't run Windows to do what hackers/crackers/criminals do -- unless, that is, that 10% represents 90% of the computers that have access to $$. As many here have said, for every security measure security "experts" come up with, there will be hackers/crackers/criminals who will make $$ trying to overcome it.

Wunderbarb

I am not sure in which category you will put me after having read my comments. First of all, Kerckoffs law is awfully right. I am a strong advocate of it (see the security laws my team uses every day at http://eric-diehl.com/loisEn.html). Nevertheless, open source is not necessarily the answer in every case. In some cases, it may even be the wrong answer. It depends on the trust model of your system. I will take an example: OpenSSL. The trust model of SSL is that Alice and Bob are trusted and they want to prevent Eve from spying on them or tampering with their messages. Thus, OpenSSL uses cryptographic algorithms. The OpenSSL cryptographic toolbox is well studied and solid -- but only under the above-mentioned trust model. Let us now suppose that I have to design a software that will run on Bob's computer and I do not trust Bob. I want to keep some information under control by ciphering them with a secret key and delivering them when I want. In that case, using an open source toolbox is not appropriate. Bob will know the source code and will know how and where the secret key is used. In other words, Kerckoffs law cannot be fulfilled because Bob will have access to the key. Open source is perfect if the trust model of the system assumes that the "owner" or "operator" of the corresponding software is trusted. This is not always the case.

steve

The "open source is unsecure" in my experience is a convenient "monster in a closet" statement by CIO's and others who really don't want to use open source because they are afraid of losing intellectual property. My last employer used the same ruse to squash the use of some tools such as Eclipse and even Java! Turned out asking our legal dept (whose head used to be part of the counsel at Microsoft) that the real reason was the supposed "cancer" that everything that open source touched would now be open to the public, losing any IP rights etc. So when I hear that open source is not secure, that is just a code for "we really don't understand how open source works" and if you want buy in from the top for such a concept just mention security risk and whatever project you were hoping to use open source will be squashed...

Michael Kassner

I think you may have coined a new phrase "security-unconscious" and it is appropriate. Open source anything will always be the "dark side" to control freaks as they do not understand the value of peer review.

Jack

Our network admins won't give out VPN client software because they consider that a security risk. Nobody told them the client is a free download from the vendor. Of course, you do have to configure it by yourself. They don't seem to understand that a valid login account is the only key to the kingdom.

Jaqui

Then MS products would never have had a single exploit. After all, if being open source were a higher risk for exploits, then the relatively few exploits in all stable release versions of open source software would mean that proprietary software would have to have zero exploits for security through obscurity to be correct. Instead, we see far more exploits for proprietary software, which is using the security-through-obscurity model.

apotheon

Do you think I'm off-base? Do you have a credible argument for keeping implementation details secret for security purposes?

apotheon

"[i]The control standards, or rules by which individuals must comply with the security policy, may vary from company to company depending on the regulations for their industry. These regulations or authoritative sources may limit how much companies can enable 'security through visibility'.[/i]" Many regulations are very poorly conceived -- often created by IT workers with no security expertise or experience, salesmen, and even politicians who need help in the morning to boot up their computers. Sometimes you just have to do your best to work around the limitations placed on you by such regulatory compliance requirements. Of course, many such security regulations are effective at improving security, if only by a minuscule measure -- but some are directly counterproductive for improving the security of a system as well.

Neon Samurai

The discussion may devolve into a posix architecture vs Windows architecture argument, but the starting point is really good security vs bad security practises. Example: the cryptography scientists have stuck to remaining open and transparent. Like all science, peer review is very important. I can say something is secure and works as easily as I can say that my experiment findings prove my hypothesis. If no one else verifies the validity of my claims then they are just the noise of one individual. If the experiment and results are replicable by all other scientists then my claims are validated. In cryptography, peer review is just as important. Anyone can dream up some complicated-looking algorithm and say it's secure, but you need peer review to truly test it and improve it through evolution. Forget open or closed software. This is open or closed processes. It is OS agnostic. Cryptography works on all platforms. Good and bad security practises are available on all platforms. It just takes more effort to secure some platforms compared to others. That's why I agree with the idea that security should be transparent. If the process is not truly hardened then who gives a whiff what authentication token the user has, since the process itself can be used to get around that token, password or biometric. (edit): hehe.. I replied already a while back.. what can I say, I'm on vacation with a nice tall glass beside me.

Neon Samurai

Unix-like systems seem to have the majority in the server market, but I don't have numbers on how many are breached versus Windows Server systems. The webserver market is another good place to look, with BSD- and Linux-based OS still leading over IIS. As for reaction time when a breach does occur, look at security update stats. The average package update time is within a day, versus other companies at six months or so. If you have to react, it's nice to have the platform that seems to do that in a shorter time. In the home market, Windows is an easy target. There is more of it, with a lower average user knowledge level. Really, the money is in the social engineering scams, and they work on either platform, so that's not a great indicator. With that huge user base you would also expect Windows to be pretty solid by now. It does have 90% of the market for beta testers.

Jaqui

Riight. Macs have 9%, or was the recent stat 12%? Linux has almost 40% in the server room [ where security is even MORE important than the desktop ]. And since there are zero accurate counts of Linux desktops possible, with people being forced to pay for Windows when they are just going to rip it off and put Linux on the system, we can estimate that Linux desktops have 50% of the server number, which would be 20%. So at best, without counting any of the other operating systems, Windows has 71%, not 90%. Edit to add: and since Apple is open sourcing OSX, Windows is the only proprietary, closed source operating system, so any open source vs proprietary comparison is going to be open source vs Windows, with only Windows being proprietary. [ hmm, only one OS isn't open source, the other proprietary OS have gone open source, maybe MS will clue in and do the same ]

Neon Samurai

The first example is a basic asymmetric encryption: Bob has his private key and Alice's public key, Alice has Bob's public key and her own private key, and Eve has neither the public nor the private keys. Everybody has their secret, and it is kept separate from the encryption mechanism. Anyone can know the full details of the mechanism without being able to break in unless they have the secret. The second example seems to be a hard-coded secret within the receiving software at the user's location, where only the developer may send packages to the software. This sounds much like the current use of certs in the Linux kernel to validate code being run. Your example seems closer to the symmetric mechanism, where the same secret can open or close the encrypted package. So, the idea is that visible source in the first example is not an issue since asymmetric keys are being used. In the second it is an issue, since someone could simply see the hard-coded secret used by your program to decrypt packages. I'd say that the rule still holds true; as the cryptography people figured out long ago, visible is better. If a mechanism cannot be fully visible except for the secret key and still keep someone out, then it's not really secure; but the emperor's robe does look lovely. If your program is closed source then you are relying on the obfuscation of your hard-coded secret key. It can't be changed after the software is handed off to someone new. It can easily be reverse engineered (a la game cracking techniques). This is about as security conscious as the websites that used to have a "please enter password to see website" prompt with the password coded in the raw HTML. Even with it "obfuscated", there is still little work involved in reversing it. The chosen trust model would be a problem regardless of the development method and license of the software. I'd look into how other update systems are doing it, based on certs, rather than relying on the myth of closed-source safety. I thought the very reason for the science of cryptography being completely visible was to allow peer review and identify false security. Anyhow, this is one of my favorite topics, so where am I misunderstanding the two examples given?
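
For what it's worth, a minimal sketch of the asymmetric case described above, assuming the third-party Python cryptography package is available (names are illustrative): the verifier embeds only the public key, so publishing its source discloses no secret.

    # Sketch only: signed-update verification with an Ed25519 key pair.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Publisher side -- the private key never leaves the publisher.
    signing_key = Ed25519PrivateKey.generate()
    package = b"contents of update-1.2.3.tar.gz"
    signature = signing_key.sign(package)

    # Client side -- only the *public* key ships with the (possibly open source)
    # verifier, so reading the source reveals nothing worth stealing.
    public_key = signing_key.public_key()

    def update_is_genuine(data: bytes, sig: bytes) -> bool:
        try:
            public_key.verify(sig, data)
            return True
        except InvalidSignature:
            return False

    assert update_is_genuine(package, signature)
    assert not update_is_genuine(package + b" tampered", signature)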

Jaqui

Yup, it is -- because the open source security model does not trust anyone who does NOT have the root login. What's that? You mean you are going to grep an encrypted file to get the key to unlock another encrypted file? I don't think so. Encryption keys are NOT stored in a human-readable file at all by gpg, just as Unix and Unix-like operating systems do not store user passwords in human-readable form; they are hashed. [ kind of like what MySQL does with a password field in a table. ]
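
A rough sketch of that at-rest protection, assuming the third-party cryptography package (GnuPG's real key format differs; this only illustrates that the file on disk is useless without the passphrase):

    # Sketch: a private key is never written to disk in the clear; it is
    # encrypted under a key derived from the owner's passphrase.
    import base64
    import hashlib
    import secrets
    from cryptography.fernet import Fernet, InvalidToken

    def wrapping_key(passphrase: str, salt: bytes) -> bytes:
        raw = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)
        return base64.urlsafe_b64encode(raw)   # Fernet expects a base64 key

    salt = secrets.token_bytes(16)
    private_key_material = secrets.token_bytes(32)   # stand-in for a real key

    wrapped = Fernet(wrapping_key("my passphrase", salt)).encrypt(private_key_material)
    # 'wrapped' and 'salt' are what land on disk -- grep them all you like.

    try:
        Fernet(wrapping_key("wrong guess", salt)).decrypt(wrapped)
    except InvalidToken:
        print("wrong passphrase: the key stays sealed")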

apotheon

"[i]Let us now suppose that I have to design a software that will run on Bob's computer and I do not trust Bob. I want to keep some information under control by ciphering them with a secret key and delivering them when I want. In that case, using an open source toolbox is not appropriate. Bob will know the source code and will know how and where the secret key is used. In other words, Kerckoffs law cannot be fulfilled because Bob will have access to the key.[/i]" I'm afraid that, as written, your statement doesn't make much sense to me. Please explain what you're trying to say with the above. The only think I can figure you might be saying is that open source software is not appropriate for DRM. Is that what you're trying to say? If so, I recommend you read [url=http://blogs.techrepublic.com.com/security/?p=363]an older article[/url] of mine that discusses DRM. Then, consider that producing software under a closed source model to hide encryption keys while still distributing them to a user didn't do much to help the makers of HD-DVD much good -- the AACS key has been cracked over and over again. If that's not your point, I guess I just didn't get your point. If it is, making the software closed source won't help.

Dumphrey

In a closed source model, when an exploit is found, it's most likely found by either 1) a cracker or 2) a hacker looking to make some money on the bounty. In case 1, it could be months and months before the exploit is caught by someone else, and a month or so more before it's patched. In case 2, it will be months and months before it's patched because it's all "academic" with no "real world application". And increasingly with Apple products, situation 2 is not very likely.

robo_dev

If you publish the blueprints to the lock on a bank safe, does that make it more or less secure? Less, of course. Now should you RELY on keeping the information secret as a security control? Of course not.

Sterling chip Camden

Obscurity seems secure, because it's "secret". But as you point out, it could only cover blatant security flaws that could be discovered by other means. If code is truly hardened against attack, then publishing it doesn't make it any more vulnerable. If anything, crackers who look at it should be discouraged and move on to more vulnerable targets.

MarkGyver

"Apple is open sourcing OSX" Could you give us a link to your source that they are making OS X open source? I thought that it was only the Darwin kernel and a bunch of standard GNU utilities.

Wunderbarb

I hope I explained it better on my blog (http://eric-diehl.com/blog/index.php) using the DRM example. It is true that hiding a key with a pure software implementation is a quasi-impossible task. In the case of the AACS hack, the key was not hidden at all; the job was badly done (see http://localhost/eric/letter/Thomson%20Security%20Newsletter%20Spring%202007.pdf). Some techniques may help to make the code more difficult to reverse engineer, but they will never block the hacker. It is only a question of time (by definition, hackers will always find their way). Now, for the sake of discussion, let's be a little bit more theoretical. Kerckhoffs law states that the security should rely on the secrecy of the key and not the security of the algorithm. Great law. As long as Alice and Bob trust each other, there is no problem, if we assume that Eve has no way to access or spy on Bob's computer. In this case, open source is perfect. Now let us change the trust model. First case, Bob is not anymore trusted. If we use open source and his computer then Bob will be able to access the secret key, game over. Second case, Eve has access to or control Bob's computer, although Bob is trusted. Eve knowing the open source code, she will know where to look for the secret key. Game over. Thus my statements are: 1- Kerckhoffs law is right and we should avoid security by obscurity 2- Open source works only if, in the trust model, the principal operating the open source software is trusted (and also its environment). Trust model is extremely important in any security design. 3- Implementing security correctly is an awfully complex task; open source helps. Is it possible to hide a key in a software implementation? This is another question that I will not tackle here. One interesting direction is whitebox cryptography. I am afraid that I was too long. #;-)

Jaqui

According to the publicly accessible records of companies like Secunia, MS has an AVERAGE patch time of 6 months AFTER they have been notified of an exploit. All major open source products average closer to 24 hours for a patch.

Neon Samurai

duplicate post removed - TR really needs to fix its web cluster

Neon Samurai

I'd go the other way and have as many people review those vault blueprints as possible. I'd keep the key secret but open the mechanism wide for peer review. If your vault blueprints are secret and only your one safe maker has looked at them, is it really using a secure process or does it only feel secure? If your vault is truly secure, you should be able to hand the blueprints to anyone and have them still not be able to get in without the combination.

apotheon

"[i]If you publish the blueprints to the lock on a bank safe, does that make it more or less secure? Less, of course.[/i]" Publish the blueprints. Get feedback on the security of the design. Improve the design. Wash, rinse, repeat. You'll end up with a more secure lock. Thus, the answer to your question is simple: "More, of course."

Tony Hopkinson

Blueprints for the key to a particular safe would be the equivalent of writing your password on a post-it and sticking it on the monitor. The key hole is here -- well, that is like a password dialog box coming up. I mean, you could have guessed that was going to happen. So no, it's far from obvious.

tuomo

Obscurity may be secure as long as it is in your head, and in your head only. Implement it and it is already "public". I can't add anything to your answer; I just hope that this obscure idea of obscurity (heh!) gets through. Unfortunately that's not easy -- I find companies still arguing for it every day. I've actually seen it for 20+ years now, and it gets a little old!

Jaqui

The Darwin project is the only part of OSX that is open source to date, but the Cocoa widgets used for the UI effectively are [ and Objective-C based ]. And the Darwin project is really the OS. Everything else is the bloat and eye candy that shouldn't be in there anyway.

Tony Hopkinson

Agreed. Unfortunately the erm compromise Eric seems to be putting forward, (he's strangely unwilling to give a concrete example), might be the only pragmatic solution. His objection to open source is demonstrably incorrect because in order to make open source a flaw, he has to pick a scenario for which the only security is obscurity. In other words hiding your front door key under the doormat....

apotheon

"[i]Or to be more precise, your vision of DRM seems limited to B2C applications. There are many other fields of use of DRM in B2B applications.[/i]" If someone is sticking software on someone else's system that is meant to wrest control of some part of that other's system away from that other person, it's malware. Period. "[i]In this context, I would like to know what is your ethical feeling about game protections for instance on game consoles.[/i]" The owner should have control over his own resources. Pretty simple.

apotheon

[b]eric.diehl:[/b] "[i]Of course, keys are not in the source code. But the executed code does handle them at one time. And here they are exposed.[/i]" That's not a security risk, because you can't just go changing the code of a running system, especially when you don't actually have access to it. The person running the security system is the trusted party -- not the person who has to authenticate with it (at least until he/she has authenticated as the person running the system). You seem to be thinking in terms of something like a webpage with a login function that is embedded in the page via JavaScript. That's terrible, awful, horrible security practice. The authentication should be on the server, where the person authenticating can't change the running code to bypass the authentication system. "[i]Every body assumes in this thread assumes that Bob is trusted and so is his environment. In that case, yes open source is the best. You do not care, that keys are exposed to Bob[/i]" You still appear to be missing the point. 1. The keys aren't exposed! They aren't in the source code. If they're in the source code, you [b]did it wrong[/b]. If there's any way for someone to get the software to do something with the keys without entering them (and thus already knowing them), you're [b]still doing it wrong[/b]. The keys aren't even accessible to a well-designed system until they're entered by someone who has the keys. Period. 2. The person running the security software is trusted. The person trying to get the system to authenticate him isn't, at least until he's authenticated. Insert Bob into this wherever you like, and call him "trusted" or "untrusted" depending on where you insert him. "[i]But at least for the sake of discussion, let us assume that you want to run a software using keys on an environment controlled by Bob who you do not trust. You do not want Bob to access these keys. At least, I would say, not easily access them...[/i]" If it's Bob's system, [b]he[/b] is the trusted party, and [b]you[/b] are not. That seems to be your biggest problem with understanding what's going on. If you're inserting stuff into Bob's system that restricts his ability to use the system in some way, [b]you're a malicious security cracker putting malware on his system[/b]. Are we clear yet? [b]Tony Hopkinson:[/b] "[i]I agree that in such an instance depending on how easy it would be to reverse engineer the app, that open source does increase the security risk.[/i]" That's only true if you redefine "security" in a manner that completely hoses up any reasonable use of the term. You're allowing eric.diehl to use some kinda newspeak or doublespeak nonsense to redefine the term "security". It's like war is peace, slavery is freedom, and malware is security software.
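
A minimal sketch of the distinction drawn above (illustrative names, standard library only): the check runs on the server, which the authenticating party cannot edit, and the server stores a derived verifier rather than the password or any page-embedded secret.

    # Server-side authentication sketch -- contrast with a check shipped to the
    # client, e.g. JavaScript along the lines of  if (pw == "s3cr3t") unlock();
    import hashlib
    import hmac
    import secrets

    # Created once at registration; the plaintext password is never stored.
    SALT = secrets.token_bytes(16)
    VERIFIER = hashlib.pbkdf2_hmac("sha256", b"correct-passphrase", SALT, 100_000)

    def authenticate(submitted: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", submitted, SALT, 100_000)
        return hmac.compare_digest(candidate, VERIFIER)

    print(authenticate(b"correct-passphrase"))   # True
    print(authenticate(b"let me in"))            # False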

Tony Hopkinson

Bob could change the code and comment out the requirement to have the key and recompile? Or similar.... So in real terms Bob was never trusted; what you are trusting is your code. I agree that in such an instance depending on how easy it would be to reverse engineer the app, that open source does increase the security risk. However, whether such a solution could be considered secure in real terms is very much open to debate. I certainly wouldn't use it for anything I considered worth putting some effort into securing.

Wunderbarb

I am afraid that we do not share the same vision on DRM. Or to be more precise, your vision of DRM seems limited to B2C applications. There are many other fields of use of DRM in B2B applications. In a professional environment, the notion of privacy is different and the notion of control of resources is different. And the risks are still here. DRM is not necessarily evil. It depends on what it protects and the context. In this context, I would like to know what is your ethical feeling about game protections for instance on game consoles. I do not speak of the impossibility of succeeding #;-) I know your position on this point, and I share it. It is my law 1: Attackers will always find a way.

Wunderbarb

Of course, keys are not in the source code. But the executed code does handle them at one time. And here they are exposed. Following all these discussions, I understand where the problem of misunderstanding occurs. Every body assumes in this thread assumes that Bob is trusted and so is his environment. In that case, yes open source is the best. You do not care, that keys are exposed to Bob But at least for the sake of discussion, let us assume that you want to run a software using keys on an environment controlled by Bob who you do not trust. You do not want Bob to access these keys. At least, I would say, not easily access them...

Jaqui

The KEYS are not in the source code at all. There is ZERO possibility of getting the keys from the source code. The keys are never in the source code -- not in ANY application worth using.

apotheon

Never grant authorization to someone you cannot trust with it. That's the proper relationship between trust and authorization, period.

apotheon

You actually presented two entirely different situations as though they were analogous. 1. Someone -- let's call him Jerry -- owns a computer. Bob has a user account on it, but only Jerry has root access. Bob's access is limited. Jerry's is not. This is because, it being Jerry's system, Jerry has the right and responsibility to secure the system against unauthorized use by Bob. Anything Bob does that is not authorized is an intrusion, an attack on system security. This is because, as the owner of the computer, Jerry's security concern involves maintaining his privacy and control over his own resources. 2. Bob owns a computer. In this case, Jerry is a content provider, and he provides content with DRM. Bob is the one with the security concern here, the right and responsibility to protect himself -- to maintain his privacy and control over his system's resources. Jerry's DRM, being an attempt to remove some control of Bob's resources from Bob, is the intruder, and his DRM should be regarded as an attack on the security of Bob's system. The key in recognizing the difference here is two-fold: 1. Someone owns the system on which the software is running. That person has the right and responsibility of securing the system. 2. Software is nothing but self-enforcing policy. It is not property, because it doesn't exist when it's not running. Saying software is property is like saying running across the street is property -- the "running across the street" doesn't exist when you're not doing it. Sure, the body that runs across the street belongs to someone, but the actions performed by the hardware are not property. They're just actions. As the owner of the system, someone has the right to decide how it shall be used, and by whom. In other words, the owner has the right to determine what actions shall be taken with the system. Any actions taken to violate that right, including attempts to limit that right via DRM, are attempts to crack security on the system. The limited user account on someone else's system being misused to attempt to gain unauthorized access to restricted functionality and the DRM are the analogous ideas. The root account on (and ownership of) the system is not analogous to DRM; it's not even close.

apotheon

"[i]For the first case, it is what is currently done with DRM. And yes, with software it is an impossible task to hide keys. The best you can do is try to make it a little bit difficult for hackers. The more clever approach is whitebox cryptography that has limitations and is not yet satisfactory.[/i]" You've been talking about trust models elsewhere. I think that's relevant here: The (technical) problem with things like DRM is that it's designed to limit one's access to one's own system. Security is about privacy and control of resources -- and DRM is a means of some outside party reaching into one's system to take away some of that privacy and control of resources. That means that, by definition, DRM is an attempt to violate security. 1. My concern is for people who want security -- not for people who want to violate it. As such, my concern is for the people who own the systems, and not for the people who want to install DRM on those systems. 2. Trying to apply principles of security to enhance DRM and similar "solutions" (kind of a paradoxical use of the term "solution", since DRM and the like are really [b]problems[/b]) is like trying to apply principles of ethics to wanton mass-murdering behavior. You can't make murder ethical, and by the same token you can't make DRM secure. DRM is an attempt to exploit vulnerabilities, not to secure privacy and resources. "[i]For the second case, it is another problem. As you rightly stated in your post of 1st April, keys will be in memory at execution time. This means also that Bob's private key will be in the memory if performing mutual authentication. Thus, we assume that Eve has no way to observe the memory[/i]" This is why Bob should never enter his private key on a system he doesn't control -- a system that he cannot secure. Once Bob starts allowing DRM onto his system, he has lost the ability to effectively secure the system.

Neon Samurai

What the heck are you doing putting secrets on Bob's computer? If it's Bob's computer then you are the untrusted third party, not Bob. Bob has a right to know what you're feeding into his OS even if he also has to be told that he shouldn't touch it. If I'm Bob and you slip secrets into my computer then you'll be treated like any other malware writer. I think the secret you're trying to hide is where the problem in the security process exists in this case. Either you're trying to do something in a way that feels safe but isn't, or you're trying to sneak something into Bob's computer that he's not allowed to touch. If I'm Bob, I'm going to have a long list of questions about what it is exactly that you're doing. If you have a specific example, it may be worth detailing it instead of random examples. You may even get a better authentication process out of jamming the idea with the other geeks here. I'm actually very curious to know what the problem you're trying to solve is, and if there is a better way someone smarter than either of us can dream up.

Tony Hopkinson

It is not necessarily Bob that is untrusted, is it? The fact that Bob is working in a sandbox should be completely irrelevant to the design of the function Bob is running. Even more germane to this fallacy you are trying to promote, under Linux for instance Bob is in a sandbox. As for your three-day scenario, yet again you appear to be falling into the trap of coding your 'security' instead of securing your application. After three days the authorisation should be revoked. The software should not check to see if Bob has the permission on his local system, but check with Jane to see if he has permission. Trust is an assumption, and it's always wise to check assumptions every time they come into play; they are often wrong.

Wunderbarb

I do not disagree at all. First of all, you are right about Kerckhoffs' law. I was definitely tired -- never enter into such a discussion at the end of the day (this was the case in France at the time of writing #;-) ). For the first case, it is what is currently done with DRM. And yes, with software it is an impossible task to hide keys. The best you can do is try to make it a little bit difficult for hackers. The more clever approach is whitebox cryptography that has limitations and is not yet satisfactory. For the second case, it is another problem. As you rightly stated in your post of 1st April, keys will be in memory at execution time. This means also that Bob's private key will be in the memory if performing mutual authentication. Thus, we assume that Eve has no way to observe the memory.

Wunderbarb

"if you are not Bob or Root on his computer". It is exactly what I am stating. If for some cases, you do not trust Bob, meaning you do not want some secrets, available in some form on his computer, to be accessed by him. Or if you want to avoid that a rootkit reads this secret. The trust model is that Bob and his environment is trusted. This is not necessarily the case. Example: professional DRM, ...

Wunderbarb

Trust model is not a joke. The trust model is the set of security assumptions on which you build the security of a system. For instance, that opening a safe will require five hours; in that case, you know what the detection and reaction time must be. In the case of OpenSSL, it is clearly that the software is operated by somebody who will not cheat. If the trust model does not fit the environment, in other words if some assumptions are wrong, then your system is in danger (already from the conception point of view; I do not even speak about implementation). I would recommend reading the book "R. Anderson, Security Engineering: A Guide to Building Dependable Distributed Systems, John Wiley & Sons, 2001". Anderson is of category 1! Trust does not mean authorized. A simple example: you may authorize Bob to execute a program but you do not trust him, thus you put his software in a sandbox. Part of the trust model is that a sandbox is able to isolate Bob. Thus, my argument is that most of the time, the open source trust model assumes that the operator of the open source code is trusted, and its environment also (there are many more assumptions, but they are outside the scope of this discussion). This is most of the time true, but in some cases it is not. An example is DRM (in order to avoid the flurry of comments about how DRM sucks, let us only speak about DRM in a B2B environment where there may be legitimate uses for controlling a piece of information). Alice wants to send Bob a document that he should only be able to view for three days on his computer. She does not necessarily trust Bob. Were the DRM/reader open source, what would restrict Bob from modifying the source code that is enforcing the rules? What would restrict Bob from extracting the keys used by the open source? Trust model is fundamental in security. Unfortunately, too many people (and especially designers) forget it.

apotheon

"[i]Now let us change the trust model. First case, Bob is not anymore trusted. If we use open source and his computer then Bob will be able to access the secret key, game over. Second case, Eve has access to or control Bob's computer, although Bob is trusted. Eve knowing the open source code, she will know where to look for the secret key. Game over.[/i]" It sounds like you're talking about embedding a secret key in source code. Entirely regardless of whether the software is open source or closed source, the proper term for that practice is "stupidity". I just wrote an article about that practice, inspired by you: [url=http://blogs.techrepublic.com.com/security/?p=435]DRM and unintended consequences[/url] "[i]Is it possible to hide a key in a software implementation? This is another question that I will not tackle here. One interesting direction is whitebox cryptography.[/i]" So far, the answer is "no". By the way, I should address something you got wrong: "[i]Kerckhoffs law states that the security should rely on the secrecy of the key and not the security of the algorithm.[/i]" That should say "the secrecy of the algorithm". Further, it's not the secrecy of the key that is the point of Kerckhoffs' principle. A better paraphrase would be: "The security of an algorithm should not rely on its secrecy."

Neon Samurai

Why would Bob using an OSS-based platform make his secret key visible? I'm not understanding why being able to read the source code somehow translates to being able to read the secret key, or even reach it if you're not Bob or Root on his computer. I'm missing the connection between visible source and visible secret keys, unless you really mean coding a password into an if statement. Anything else I can think of would use visible source referencing non-visible certs.

Jaqui

http://localhost/eric/letter/Thomson%20Security%20Newsletter%20Spring%202007.pdf will only work for you.

Tony Hopkinson

Open source doesn't help you secure a variant of this? if myPassword == "s3cr3t" { DoMySecretStuff(); } I'm hoping for a translation difficulty, but you keep talking about Bob getting the key from the code, and that is a tad concerning. The trust model is a joke; all it is, is a redefinition: Authorised() = Trusted(). Its purpose is to make certain purveyors of security look better in the market. If Bob has the key to do X, he's trusted to do X. If Bob is trusted, he can do anything.

Neon Samurai

That seems to be the magic combination. IE6 is the key though. IE6 + Flash usually runs well, but I find websites that turn the combination flaky. Now, I also break Excel weekly, which I'm also blaming on IE6's poor memory management. The issue there starts in Excel but affects IE windows also, until all browser and Excel windows are closed and reopened; that sounds like not properly managing and releasing memory to me. I've never had the issue with IE7 or Firefox and Flash in websites. It just really sucks to lose a string of five or ten waiting articles and forums because some random over-use of "rich media" sent the browser into a core dump.

$$$$$$$$$$

Where I use Flash at all, it's stable.

Neon Samurai

If I had one thing that CNet needs to fix, it would be that. I was a regular reader there until browser crashes drove me away. I'm limited to IE6 when I have time to read, so I'd get a bunch of windows open to pages and close them as I worked my way through. Inevitably, one of those IE windows would freeze and take down the rest of them. When it became one of every three clicks, I left. TR's pages do the same thing but far less frequently. I'm guessing that it's due to Flash, since there is less Flash on TR's pages and other Flash-limited websites don't have the issue at all. FF never crashes due to this issue, but when I have FF in front of me, I don't always have time to read; or I'm home with more interesting things involving my own machines.

$$$$$$$$$$

santeewelding, if you Google "apotheon + externalities," or just read the "not inconsiderable" body of his work within techrepublic.com, you'll find he has already answered many facets of your questions. Enough specific comments, I would say, to satisfy your curiosity, if that is as general as you imply: [i]The larger frame entails, as you acknowledge, "social factors". You mention game theory and economics. I see other factors, too. One I see in reversion is structural. That is, in the security of the larger frame, we are jostled by two immediates. If, Aspergrer-like, looking for "specifics...finer edge...exactly..," and a certainty of decision, how do we escape now a tyranny of the public and the private? Why, to a still larger frame. Got one in mind? How large? How without reverting Parmenides-like to the faux certainty of one in favor of another does one escape tyranny?[/i]

santeewelding

I admire your escape from the tyranny of the immediate and the particular with respect to security. You impose at the same time your own tyranny, or control, of those immediates and particulars. That, as opposed to having coupled yourself to them, being jostled Brownian-like by them, and raising a partisan fist in salute of one damn proximate or another. You escape without relinquishing control. You accomplish this by imposing a larger frame, that of the public in relation to the private; transparency in conjunction with obscurity. It is the openness of the public, while not losing the impetus of the private, that intrigues me. It is the hiding of the cryptic in plain sight. The larger frame entails, as you acknowledge, "social factors". You mention game theory and economics. I see other factors, too. One I see in reversion is structural. That is, in the security of the larger frame, we are jostled by two immediates. If, Aspergrer-like, looking for "specifics...finer edge...exactly..," and a certainty of decision, how do we escape now a tyranny of the public and the private? Why, to a still larger frame. Got one in mind? How large? How without reverting Parmenides-like to the faux certainty of one in favor of another does one escape tyranny? And, yet, without losing our impetus, would that it survive. Sorry to have loaded your plate with so much (food) for thought, but you did ask.

apotheon

Please put a finer edge on the question, then, so I know what exactly you're asking.

apotheon

I take it you didn't get the part about improving the design. You must not have read my previous post closely enough. Try again.

Dumphrey

If your safe is hiding its weak aluminum parts, then it's not secure, as a good external explosion could shatter those parts, allowing unrestricted access. The only reason to do this would be laziness and cheapness: little work for high profits. Why would it all not be made from hardened steel?

robo_dev

would the blueprints make my chance of success lower or higher? I would vote for 'higher': the blueprint would tell me where to drill and might indicate whether the parts are made of hardened steel, titanium, or aluminum.

santeewelding

Your deadpan recitation of fact after fact and matter of fact would give pause to God himself. I missed that piece (4/25/06), having signed on to TR some months later. Had it been familiar to me, I would have sharpened my question, which still abides, but to a lesser extent. Lesser, but not vanishingly so.

apotheon

Your statement is absolutely correct.

santeewelding

Which leads to the management of publicity and all that which it entails -- not that which privacy entails. Would this change, enlarge, or otherwise disturb the identity of security/privacy in some possibly fruitful way?

ozi Eagle

Hi, Publishing the blueprints to a lock would not make it less secure, it only shows how the lock physically works. To reduce the security you would have to publish the details of the KEY to that PARTICULAR lock, which is what the article said. Herb

Tony Hopkinson

who wants to placate everyone and not be seen as stroppy. It's very hard to be myself when I run into utterly stupid code. Here's one I ran into today:

    if (year > 1997)
    {
        switch (year)
        {
            case 2000: {...}
            case 2001: {...}
            case 2002: {...}
        }
    }

Code review / policing's main function is to end up with a comprehensible code base; double takes like the above are just f'ing annoying. You are reading through and something like this just derails your entire train of thought. It really, really p1sses me off.

$$$$$$$$$$

Good fill-in-the _____ see "peer review" as free (price) verification of our work. Crappy workers see "peer review" as "code police." In other words, if you don't have anything to hide, you won't mind me inspecting your source code without a warrant.

seanferd

If all the locks are keyed the same. The whole idea behind real security is to not have a weak point to discover.

Tony Hopkinson

box. To me it has very little to do with open or closed source. Recon is of course a key thing, but in any non-trivial piece of software it's far easier to use reverse engineering and vulnerability tests to find holes than poring over the source code. That's what you have to do to fix 'em. :p Open source gains its advantage in that, in working on the code, a developer might discover a flaw en passant, as it were. If it's useful code, more developers are going to use it, so your chances go up. Peer review in these terms isn't code police, though there is an aspect of that to open source (and good closed source as well). It's "I used this in a way you hadn't thought of and this happened...."

robo_dev

is reconnaissance. The confusion of this discussion is that there are two issues here. The first issue is that the old concept of 'security through obscurity' implies that the risk is less since nobody knows the vulnerability exists. And obviously, that is foolish, since there are a whole lot of pretty smart criminals out there. The second issue is the argument for 'full disclosure' of any security flaws. This is a more complex issue, obviously. The hard-headed ones say 'damn the torpedoes, and tell the world when there is a vuln discovered' as some ham-fisted weapon to embarrass and cow vendors into fixing things pronto. (And some vendors seem to only react to this sort of pressure.) The more reasonable approach, obviously, is a more responsible one, such as publicly releasing the information about a vulnerability, but not the exploit itself.

Tony Hopkinson

might look for a call to the API dialog, to see if some daft arse had done if edit1.text == "password" near it. It's a point, but there are other ways to get that info, and if your 'security' relies on the location of the drill hole being secret, you are fked...

robo_dev

to insert the plastic explosives. So my example sucks; I'll agree with that. A rather humorous issue that happened a couple of years ago involved the photo that Diebold put on their website for replacement keys to open voting machines (which are all keyed the same). The photo was of the actual key, and a 'security researcher' made a key based on the website photo, and it worked!!

apotheon

Publishing the blueprints of a lock is just like making the source for an encryption application available. If you don't know anything about how locks are designed, that might make it sound like open source software is bad for security -- but I have friends who are locksmiths, and know a fair bit more about lock design than most people. A good lock is designed so that, by altering the tumbler configuration inside it, you can change the design of key needed to open the lock. Thus, having the blueprint to the lock doesn't tell you anything about what kind of key, or what combination, is necessary to unlock it. You'd need to know the specific tumbler configuration of the lock. I'll use a key lock with pin tumblers as my example here: The tumblers and the key are essentially negatives of each other. As such, publishing the pin tumbler configuration would be the same as handing out copies of the key. Good thing the pin tumbler configuration for your lock isn't part of the blueprint -- just as the decryption key isn't part of the source code for a security application (unless the developer's a complete idiot or designing DRM software -- which might also mean the developer is an idiot). I don't go publishing my private OpenPGP keys on a publicly accessible website just because the OpenPGP software I use (GnuPG) is open source software. I'm glad the software is distributed under an open source license, though, because that means the software itself is more secure than it would otherwise be.