Security

Insider risk or threat: Which are you?

Some security articles refer to insiders as potential risks in one paragraph, but further on classify insiders as serious threats. Michael Kassner asks: Are you a risk, a threat, or both? What's the difference?

I suspect some of you are wondering where I'm going with this. Well, I'm on a mission to demystify IT security, and using words like risk and threat interchangeably sure doesn't help. To prove my point, here's what dictionaries have to say about risk and threat:

  • Risk: Probability of damage, injury, liability, loss, or other negative occurrence. Caused by external or internal vulnerabilities and may be neutralized through premeditated action. Example: It's not worth the risk.
  • Threat: Communicated intent to inflict harm or damage to a person or property in order to force someone's compliance or to restrict his or her freedom. Example: His idea to steal your car was not an idle threat.

Before getting too much further into the discussion, I'd like to define insider as well, just to make sure we're all on the same page:

  • Insider: Person belonging to a limited circle of people who understand the actual facts in a situation or share private knowledge. Or a person in possession of corporate information not generally available to the public. Example: Insiders knew that the president would veto the bill.
Who's a risk

From a security standpoint, everyone's a risk. That may seem harsh, but if risk assessment is to be of any use, that's the way it has to be. Quite simply, we all have the potential to negatively impact a company's well-being.
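One common way to make "everyone's a risk" actionable is to put rough numbers on each scenario. Annualized loss expectancy (expected incidents per year times cost per incident) is a standard formula in risk assessment; here is a toy sketch, with the scenarios and all figures invented purely for illustration:

```python
# Annualized Loss Expectancy: a common way to put a number on "risk".
# ALE = ARO (expected incidents per year) x SLE (loss per incident).
# All scenario names and figures below are invented for illustration.

def annualized_loss_expectancy(aro: float, sle: float) -> float:
    """Expected yearly loss from one risk scenario."""
    return aro * sle

scenarios = {
    # scenario: (incidents per year, cost per incident in $)
    "employee mistakenly deletes shared files": (4.0, 500.0),
    "disgruntled insider leaks source code":    (0.1, 250_000.0),
}

for name, (aro, sle) in scenarios.items():
    print(f"{name}: ${annualized_loss_expectancy(aro, sle):,.0f}/year")
```

Note how a frequent, cheap mistake and a rare, expensive attack can land in the same ballpark, which is one argument for treating accidental and intentional insiders as separate line items.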

Who's a threat

Are we all threats as well? Not according to the above definitions and Nicki Wallace, a writer for RSA's Speaking of Security blog. We all could be threats, but since most of us are nice people, we're considered risks simply because we make mistakes; you know, that human error thing.

Threats are intentional

It's easy to come up with scenarios where an insider is deliberately causing harm. Security newsletters are full of articles about disgruntled IT workers seeking revenge, or individuals stealing Intellectual Property in hopes of making money or helping a new employer.

Mistakes are unintentional

Mistakes are a fact of life in IT, especially when you consider how fast anything to do with IT changes. Wallace refers to a CompTIA report mentioning that companies are becoming increasingly aware of this:

"Human error and negligence are bigger concerns among companies, than deliberate or malicious threats to their information security."

I realize that training organizations tend to promote their agenda, but it seems logical that user mistakes are happening more frequently. Wallace also agrees:

"Organizations cannot afford to turn a blind eye to the wider insider risk from employees who accidentally or negligently cause vulnerabilities to data or system security. When recession leads to cutbacks, organizations need to take special care: layoffs may force employees to take on more work, increasing the chance for mistakes being made or unwise shortcuts taken."

Why make the distinction

Even though the results may be the same, reducing the risk from intentional insider misbehavior needs to be handled in a completely different way than risk from accidental insider misbehavior. Threat prevention requires system-wide security measures, similar to those protecting the company's network from external threats. As for accidents, most experts agree that user education (blessed by upper management) is the best solution.

Final thoughts

Insider risk may be intuitive to many. But my experience tells me that most organizations do not differentiate between mistakes and threats when instigating security programs. What's your take? Am I wrong?

About

Information is my field...Writing is my passion...Coupling the two is my mission.

111 comments
JosB

Since I hit the 'maximum message level' I'm bringing this one down. Let me get further into this. Why should someone not be allowed to work from home on sensitive information or even trade secrets? Some programmers have strange working hours, but many offices aren't open 24/7. Keeping the office staffed for a few programmers would probably be far more expensive than having them work from home, where they only need remote access to the source code.

Sure, this could lead to the situation where the programmer can steal the information. But then, he's already working on the source and therefore most likely familiar with whatever creates the advantage for the company. And since he's allowed to work from home, there was a trust relationship. You need to understand that in many specific situations the benefits of someone working from home far outweigh the benefits of keeping them locked in the office.

This is an interesting story because the press thinks it's interesting. And because of that it's bad for Goldman Sachs, since it's causing reputation damage even more than actual damage. There is no technology to keep a programmer working for your company. Well, there is, but for some reason chaining someone to his chair for the rest of his life isn't allowed. And he will not be very productive, because he's not motivated and far more likely to destroy whatever he can get his hands on.

Anyway, since programming is far more about how to handle things and not so much about the actual source code (it only gives a time advantage), you need to keep the programmers in your company. And you will have problems the moment they leave. Having them on the payroll leaves a risk: you benefit from them but can also lose some. Access controls, logging, counter-espionage technologies: those won't work for people with legitimate access. The only question is: do these people have more rights than they should have based on their function? If so, those rights should be taken away.
Application of security measures past the basics should be balanced. This is done in the physical world all the time. Only a few specific places have a huge number of safety measures; the rest is locks, gates, fences, and some guards (not always present 24/7). Why? Because security is a trade-off. You want to protect something valuable, but it must be cost-efficient. You won't buy a $1,000.00 lock to protect something with a value of $100.00. Get some common sense.

When people come to me screaming that things are terribly wrong, I have a couple of questions:

  • Are people dying or seriously injured?
  • Is the building falling apart?
  • Is the company going bankrupt?
  • Will the company suffer from several years of losses because of this?

For some reason people always tell me that "it's not that bad, but....". This isn't different in the GS case. Their main problem is that the press noticed some smoke and wrote that their entire building is burning down. Good PR is more important to solve this than restrictive and expensive security measures. In the end everyone (well, most) wants to be able to do their job as easily as possible. For IT this means shutting down stuff as much as possible. For business this means opening up stuff as much as possible. Find the balance and you will do fine.

kumvinod

Well, I am not a risk... but rather a logic bomb. I take care to delete all the knowledge base created by me after taking a backup. In short, if you don't value my time and work, I also don't care about the company's time and money.

JosB

There is a real difference between the two, and it has nothing to do with being human and making mistakes. Risk is something (very often an action) that has both a potential negative outcome and a potential positive outcome. Sometimes risk is taken on purpose, other times without knowing. A threat differs from this because there is no upside to it. The best you can get is a zero result, and things can only get worse.

Let's put this in a small example. I'm walking in Africa without food, and hungry. Suddenly I see a sleeping lion with a dead animal very close to it. When I move toward that dead animal to take it, I'm taking a risk. There is a huge downside: me being killed by the lion. But there is also a gain: me having food in my stomach. Now, the lion is a threat. I have no benefit from the lion being there, and it's very able to kill me. This is the difference between risk and threat.

Then why are insiders a risk? They are a risk because having an employee is both a benefit and a liability for a company. Business cannot do without them. If it can, why are those people working there? It's better for a company to cut useless people out. So we need those people because they benefit the company. And that means that the moment they aren't there, the company will not have that benefit anymore. This alone makes people a risk.

However, in many risk assessments people tend to look only at the downside. People can do things that hurt the company in so many ways. Completely forgetting about the benefits of having the people around. So if you do a risk assessment you should look at both sides: what's the potential benefit and what's the potential loss. If you only look at the downside you aren't doing a risk assessment, you are making a threat profile. That's something completely different.

And don't think you can eliminate all risk. If you can, you are out of business, because business only exists where risk is present. Otherwise there would be gain without the possibility of loss, something everyone wants.

JamesRL

One can never be totally sure whether something will trigger someone moving from a risk to a threat. We often have to rely on professionalism and our perhaps incomplete and flawed knowledge of our staff and colleagues.

As a manager, I've often been privy to privileged information. From who will be laid off in other groups, to which companies we are partnering with or acquiring, to advance knowledge of strategic initiatives, there is a lot to consider risky. One way I try to deal with it is to ask others not to share information with me that isn't necessary for my job or my department. So I may know that layoffs are happening, and when, but not who, unless that person interacted in some way with my group in a business process.

As the site security person, I do have access to all kinds of other data: when people card in and out, and access to the video recorders. I do not go browsing through either of these, and would not do so unless asked by HR or senior management. Our confidential systems are fairly well secured, so I can't see anything that doesn't relate to my own group. And that's a good thing.

In previous positions, I have seen things that were highly confidential: executive salary plans, what nasty sites some people have visited, etc. But I have always held that these things should be kept in the strictest confidence. If I expect others to respect my confidences to them, I should respect the confidence of others' information. The golden rule as it applies to privacy.

Having said all that, how do people know that I will keep my word? That's trust, and it's a hard thing to create and build in business, and it can evaporate in a flash. In these days of the exposure of corporate greed and fraud, it's hard to think that people will keep confidences about fraudsters and criminals.

James

elrico-fantastica

In his words, "risk doesn't cover it." You kill one SQL Server and you never hear the end of it!

Photogenic Memory

It's very confusing, but that's how they roll. Morale is low, and I guess that's how they like to keep it. You'd think business would like to make workplaces worker-friendly, but making money is the bottom line, and so is saving it. I guess in this case I'm a risk and a threat no matter how well I work. I guess the only way to work around this is to open up my own business.

Jaqui

I trust everyone... to be out to screw me over. That way any surprises are pleasant ones.

Ocie3

Quote: "...Insider risk may be intuitive to many. But my experience tells me that most organizations do not differentiate between mistakes and threats when instigating security programs. ..."

In many cases, there isn't a need to differentiate between mistakes and deliberate harm, because measures taken to detect and/or prevent mistakes will also detect and/or prevent intentional loss or damage (and vice-versa). With regard to IT, a rudimentary challenge is protecting data and equipment from loss and damage that can result from the loss of electrical power. The same measures taken to respond to a loss of electrical power caused by a thunderstorm (or an automobile colliding with a utility pole) can also prevent loss or damage caused by someone who intentionally cuts electrical power, for example, by unplugging IT equipment (although unplugging the equipment might simply be a mistake).

In that case, however, a person who intends harm might easily find a way to defeat measures which have been taken to secure the system against the risk of losing electrical power -- especially if they are an "insider" who knows what measures have been taken, recognizes their limitations and vulnerabilities, and finds a way to defeat them. The value of "inside knowledge" is why determining the amount of risk, and detecting and preventing intentional harm by one or more "insiders", becomes very difficult to do effectively. But you cannot keep everyone "in the dark", and trying to limit the availability of "inside knowledge" (usually on a "need to know" basis) has its own set of problems.

IMHO, the best response is to create a work environment which minimizes the risk that an insider will become hostile to their employer. People who look forward to coming to work every day, because they not only enjoy the work that they do but also respect the organization for which they work and enjoy working with their co-workers, make fewer mistakes and seldom pose a threat to their employer and to fellow employees. The risk of two or more employees forming a conspiracy to engage in activities, such as embezzlement, that will cause loss and damage to the organization is consequently minimized as well. Creating and maintaining such an environment in the workplace is arguably the most difficult management challenge of all.

Tony Hopkinson

In that there are many things I could do, intentionally or otherwise. Measures to reduce the threats, knowing the consequences, and just being a nice guy :p make me less of a risk, though.

Michael Kassner

Are you referring to the fact that you remove all the data related to your work when you leave the company?

Ocie3

and an interesting website. This isn't the first time that I've read about software development project managers, systems analysts, and even program coders who took copies of their work with them when leaving an employer (whether voluntarily or involuntarily). I can remember when it wasn't usually illegal, too, if only because it was not possible to copyright software (neither binary nor source code). But computer tech workers typically had a nondisclosure agreement and, sometimes, a non-compete agreement with their employer. So the employer could sue for breach of contract if they had any evidence that the employee had violated any of the agreement's provisions. (Whether a judge would enforce one or more of the specific provisions was often an open question.) I don't know whether that remains a common practice among IT firms now.

Tony Hopkinson

The employee has access to the valuable code and has access to the internet, so he's capable of copying it to a location outside the company's control. That threat isn't countered, because they are such nice guys. Seems to me some manager has surrounded himself with incompetents in order to look better, because this 'theft' had all the hallmarks of Baldrick's least cunning plan. As I keep saying, address the threat first; how much resource you spend on doing that depends on consequences and likelihood.

Michael Kassner

Your pain. I've done something similar in my career. My only consolation is that it does eventually get better.

Michael Kassner

I see all of us as risks; how big a risk depends on how accident-prone we are or how much intent we have to do harm. It sounds like the business you described may have increased risk from both.

Michael Kassner

Yet, creating an environment to eliminate any chance of intended harm or threat has to be really difficult to accomplish. Especially when considering multinational companies. I really appreciate your post. I worked pretty hard on this topic and realized immediately that you have a deep understanding of the subject.

Michael Jay

I could be a risk, but I will not go there, at least not with intent. Making a mistake, perhaps, as I am good at those, but they are usually not a risk, just an embarrassment.

Michael Kassner

Yet, you realize that analytically all humans are a risk as they have at least a 50% chance of making a mistake.

Michael Kassner

Even in my world. I'm not a programmer, just a network engineer/security admin and I have to sign NDAs all the time. Non-Compete doesn't really apply as I'm a consultant.

Michael Kassner

Address the threat in this case? DLP or some kind of file extension monitor?

RU_Trustified

1) There is no patch for stupidity.
2) Complexity and poor design of systems increase risk; accidental misconfigurations can increase risk and enable threats.
3) Your most trusted insider can be compromised, or go off the beam.
4) Systems were not designed to be secure, but to share information.
5) Using authentication as a proxy for authorization is ineffective.
6) If you can't 100% trust your staff, you need a trusted operating system.
7) Without enforcement, there is no security.
8) Resulting damage from unintentional error can be as great as, or worse than, intentional attacks.

Jaqui

at everyone as a threat, by the definition in the article, and plan accordingly. The "paranoid" outlook has protected the systems from accident as well as intentional harm. I may [ok, I do] go to an extreme by using the model I do, but it does help in keeping systems and data secure. The environment provided by Unix and the like helps with security, without a major loss of usability. With Windows, applying my model would destroy usability.

Ocie3

I appreciate your response to my comments very much. IT security is a tough row to hoe, and it can become complex. There are, however, some good books on the subject. For a start, just ask about the one(s) chosen for a postgraduate course on "computer security" at a reputable graduate school that offers an MBA in Information Systems Management.

Jaqui

the secure multiuser system built into Unix and Unix-like operating systems gives you that environment out of the box. The only environment where it is hard is MS Windows, because they are still playing catch-up in the security area.

Michael Kassner

My definitions, if you agree with them. Just being a human makes all of us a risk. You referred to intent, that changes the risk from being a mistake/accident to a threat. Intent makes a huge difference.

RU_Trustified

This is the way I see it. Work should have been done on the premises. Ideally, policies would be enforced so that code could not be uploaded off site. This presents a real-life example of the need for mandatory access controls to prevent unauthorized release of information, and immutable audit logs to detect the attempt. Excuse the shameless plug (you may be getting used to it), but this is why counter-espionage technologies are required, and more so when the stakes are highest. Finally, management should have realized what an asset they had and kept this employee very content, and on staff.

Michael Kassner

Keep it up, lots of good information and opinions are surfacing.

JosB

According to Reuters on Aleynikov: "He told investigators that Goldman Sachs knew he worked on the program from home and said nothing about it previously, according to the court record." You are thinking technology again, and technology is not the problem and not the solution. And non-compete clauses? No sensible and good programmer with some track record in the financial world will sign those. Why would they? Someone else is very willing to hire them without such a document. It's really hard to find people with knowledge of financial instruments combined with programming skills.

Tony Hopkinson

under EU regs. They are still in contracts, and some of us still live by our given word, but you haven't a hope in hell of stopping an employee going to a competitor.

RU_Trustified

Sure, an individual who has been involved with the concept of something has the potential to re-create it, possibly even improve on it. In this case, obtaining source code is the low-hanging fruit. Utilizing counter-espionage technology such as immutable audit logs to provide forensically defensible audit trails can act as a deterrent, if users know about them, and MLS/MAC can enforce policies for access and use of data to prevent such scenarios in the first place.
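For readers wondering what an "immutable" (tamper-evident) audit log looks like in principle: one generic technique, not specific to any product, is to hash-chain the entries so that altering or deleting any past record invalidates everything after it. A minimal sketch, with invented event data:

```python
import hashlib
import json

# Tamper-evident audit log: each entry's hash covers the previous entry's
# hash plus the event payload, so editing or removing any past record
# breaks the chain. (A generic sketch of the technique only.)

def append_entry(log: list, event: dict) -> None:
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(event, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"event": event, "prev": prev_hash, "hash": entry_hash})

def verify(log: list) -> bool:
    prev = "0" * 64
    for entry in log:
        payload = json.dumps(entry["event"], sort_keys=True)
        if entry["prev"] != prev:
            return False
        if hashlib.sha256((prev + payload).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, {"user": "dev01", "action": "read", "file": "pricing.c"})
append_entry(log, {"user": "dev01", "action": "upload", "dest": "external"})
assert verify(log)
log[0]["event"]["action"] = "noop"   # tamper with history...
assert not verify(log)               # ...and verification fails
```

Real implementations also need the log stored where the insider being audited cannot rewrite the whole chain, e.g. on a separate append-only system.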

JosB

I have one advantage in my job: I don't have to speak tech, I can speak process and business. I was doing a BCM scan a couple of weeks ago, and several people mentioned one person as being a problem. Not that he would leave or steal something, just because he's doing a critical job alone with hardly any back-up. Management is aware of this situation but cannot solve it, because that kind of knowledge is hard to obtain. The same thing goes for financial algorithm programmers. While you have them you can do your job; if you lose them, doing your job will be harder.

Now this guy took some code with him. Suppose he also designed a large part of it. It would be very possible to recreate that code somewhere else with a competent team without having the source. Having the source only speeds things up. It's very hard to prevent those situations. Management needs to be aware of them, that's for sure. But it's their call whether they want it to persist or whether they will take action. And action should not be technical measures but focused on the business process. What will happen if your 'secret' system gets known by competitors, in whatever way possible? And how could that happen, and is it possible to prevent? Technology is only part of this, never the complete solution. It could well be that this situation could have been avoided by giving the programmer a raise. We'll never know....

Tony Hopkinson

so encryption, non-standard compression, etc., will 'fool' it. Same as the file extension thing: just rename the file, trivial. They are tools to sell to clueless management. I think, because of my definition of threat in this particular scenario, all I could do is manage the risk, which is the employee. If we use your working definition of employee as a threat of variable threateningness ( :p ), then we'd both be attacking it from the same angle.
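The rename trick works because extension monitors never look inside the file; content-based inspection instead compares the claimed extension against the file's leading "magic bytes." A toy sketch (the magic numbers are real, the filenames are invented; real DLP products go far beyond this, and as noted above, encryption defeats both checks):

```python
from typing import Optional

# File extensions are trivial to spoof; leading "magic bytes" are harder.
# A toy sniffer comparing a file's claimed extension with its content.

MAGIC = {
    b"\x50\x4b\x03\x04": "zip",   # also docx/xlsx/jar containers
    b"\x89PNG":          "png",
    b"%PDF":             "pdf",
}

def sniff(data: bytes) -> Optional[str]:
    """Guess a file type from its leading bytes, or None if unknown."""
    for magic, kind in MAGIC.items():
        if data.startswith(magic):
            return kind
    return None

def extension_mismatch(filename: str, data: bytes) -> bool:
    """True when the content sniffs as a known type that contradicts the name."""
    claimed = filename.rsplit(".", 1)[-1].lower()
    actual = sniff(data)
    return actual is not None and actual != claimed

# A zip archive renamed to .txt still sniffs as a zip:
print(extension_mismatch("notes.txt", b"\x50\x4b\x03\x04archive-bytes"))  # True
print(extension_mismatch("photo.png", b"\x89PNGimage-bytes"))             # False
```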

Michael Kassner

If Data Loss Prevention technology would have helped or not. It may have at least prevented copying. But I suspect that's why the person encrypted it: to fool the DLP device.

Tony Hopkinson

I'm not sure there is a useful technical solution to this one. It can certainly be made harder, but there would be a productivity reduction. Let people know you are watching posts over the net, and they save it to a jump drive; watch/block that, they print it out; watch that, they memorise it. :p No internet access and no external storage, maybe.... The thing that really stands out to me is that the guy who did this got caught because he's a complete idiot, so maybe the real threat is employing incompetents.

nanotechsoftware

I have always maintained that bad or minimal security is worse than no security. Not that I am advocating a policy of no security, of course. But if you have no security (be it in IT or in general), and everyone is aware of that, they will take precautions. However, if you have security, the responsibility is lifted from individuals, and due to deferred responsibility people become careless, entirely relying on that security. If that security is shambolic and provides no real protection, all that has been achieved is the lowering of one's guard, which has ultimately rendered you more rather than less exposed.

RU_Trustified

Right, Michael. So much security is an afterthought. If you think about it, mainframes were secured by a key to the computer room (although students quickly learned how to hack in). The PC was modeled after the mainframe and soon became a distributed environment connected to the internet. With distributed computing came distributed risk.

By authorization component, I mean kernel-level behavior enforcement that acts proactively to effectively neutralize attacks while it governs file access, sharing, and manipulation post-authentication. Why are systems affected by malware? They can piggyback into a system through an authenticated user, or his mistakes, and then are free to carry out their mission if they can get by filters, and maybe social-engineer an unwitting user to assist them. But that is just a numbers game: they can attempt many times, and they only have to succeed once.

Look at it this way. We are getting a pretty good handle on securing data in storage and in transit, but where do many insider breaches and espionage take place? When data is in use. That very difficult aspect of IT security is where the authorization component comes in. You really can't have counter-espionage protection without it. That means even protecting against admins with passwords. So is it enough to authenticate a trusted insider into networks or systems, along with a wink and a boy-scout promise?

@ocie3: An authorization structure need not impede legitimate data sharing that has been approved by the security officer. What it must do is provide enforcement of policies and prevent work-arounds so that rules cannot be bypassed. Only then will there be progress made in this area.

Michael Kassner

If that comment was made to point out that systems were inherently insecure and authentication was an afterthought.

Ocie3

Quote: "4) Systems were not designed to be secure, but to share information." The two are neither mutually exclusive nor does either aspect particularly affect the other. Authorization structures are often used to prevent, at the least, "outsiders" and intruders from using the system to gain access to data, and to the products of processing it, as well as to prevent them from creating, altering and/or destroying data, and/or other system resources such as software. That does not mean that an authorization structure inherently limits the sharing of information among people who are authorized to access and use the data and/or the products of processing it. They might even have authorization to share data, and/or the product(s) of processing it, with people who do not have authorization to access the data, via the system, themselves.

Michael Kassner

Could you go into more detail about this statement: "Using authentication as a proxy for authorization is ineffective." I think I understand, but want to make sure.
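One way to read that statement: authentication establishes who a user is, while authorization establishes what that user may touch; treating a successful login as permission for everything is the "proxy" failure. A toy sketch with invented user and resource names:

```python
# Authentication answers "who are you?"; authorization answers "may you
# do this?". Using login alone as the gate lets any authenticated
# insider reach any resource. (Invented names, illustration only.)

AUTHENTICATED = {"alice", "bob"}        # users who logged in successfully
ACL = {"trading_source": {"alice"}}     # who may read each resource

def can_read_authn_only(user: str, resource: str) -> bool:
    """Authentication used as a proxy for authorization: too coarse."""
    return user in AUTHENTICATED

def can_read(user: str, resource: str) -> bool:
    """Authentication plus a per-resource authorization check."""
    return user in AUTHENTICATED and user in ACL.get(resource, set())

print(can_read_authn_only("bob", "trading_source"))  # True  -- too permissive
print(can_read("bob", "trading_source"))             # False -- authz enforced
```

Mandatory access control schemes push the second check into the kernel so it cannot be skipped by an application.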

Michael Kassner

I'm glad you're commenting. We need some detailed info here about how you can resolve this problem.

RU_Trustified

Michael, Full disclosure: I am with a vendor but I present our technology in these posts for the purpose of presenting an alternative approach to solve the IT security problems in question. Something easier in this regard, suitable also for SMBs, is multilevel integrity. Trustifier allows easy ranking of code, files and devices for secrecy and integrity. By ranking code higher in integrity than any user, that code can not be altered or destroyed accidentally, or intentionally via the so-called accidental-deliberate data dump.

RU_Trustified

when you say... "In principle, there is no system of security that cannot be defeated by the collusion of enough people who have respective authorizations to access, create, use, alter and/or destroy the protected resources." You may not have seen a few of my posts on Michael's ghostnet article, where I described a counter-espionage technology that we offer. The combination of multilevel security for secrecy, multilevel integrity, and multiple domain separation can combat a lot. We can also provide a "trust credit" system on all individuals inside a system, so that if a colluder in authority is granting extra privileges to someone to gather aggregate data, or whatever, it can only go so far before his suspicious behavior is flagged.

Michael Kassner

On a good backup system. What does that entail to you? I know of very few SMBs using a backup system that would prevent the accidental or intentional overwrite of good files. Only enterprise organizations seem to have something similar to a grandfather/father/son system that would prevent overwriting existing files.
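For readers unfamiliar with the scheme: grandfather/father/son keeps daily, weekly, and monthly backup generations on separate media, so overwriting today's backup with bad data cannot destroy the older generations. A toy scheduler deciding which tier a given date belongs to (the tier rules below are one common convention, not a standard):

```python
import datetime

# Grandfather/father/son rotation: daily "son" backups recycled quickly,
# a weekly "father", and a monthly "grandfather" kept longest.
# One common convention: month-start -> grandfather, Sunday -> father.

def gfs_tier(day: datetime.date) -> str:
    """Return the backup generation a given date's backup belongs to."""
    if day.day == 1:
        return "grandfather"   # monthly generation, retained longest
    if day.weekday() == 6:     # Sunday (Monday == 0 in Python)
        return "father"        # weekly generation
    return "son"               # daily generation, recycled fastest

print(gfs_tier(datetime.date(2009, 7, 1)))   # grandfather
print(gfs_tier(datetime.date(2009, 7, 5)))   # a Sunday -> father
print(gfs_tier(datetime.date(2009, 7, 7)))   # son
```

Because each tier lives on its own media set, a mistaken or malicious overwrite only reaches the generation currently being written.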

Ocie3

In my experience, the measures that you describe are features of Unix and Unix-like operating systems (I don't know whether Apple's Mac OS X merits inclusion). They primarily prevent unauthorized access to and usage of system resources. Regardless of the OS, we know that making file backups can mitigate the accidental alteration or deletion of data and processes (programs and scripts), and/or the improper operation of equipment. An authorization structure, and having a reliable file backup subsystem, is what I would call "a good start". It can be an effective deterrent against "outsiders" and intruders, and their attempts to access and use the system can -- should! -- alert those who are responsible for its security. However, whether the data is valid and has integrity doesn't necessarily depend just upon the people who have access to it, or the authorization to create, alter or delete it. The origin of most data is from outside the system per se. So we must inevitably "consider the source": why, how and from whom data is obtained, authenticated, validated and entered into the system. Of course, securing a system from unauthorized installation, alteration and/or usage of software doesn't make the output of any of the software valid. It does offer protection from unauthorized alteration of the software that is intended to make its output invalid in some respect(s). With respect both to data and to software, authorization in and of itself doesn't stop anyone from making an "honest mistake" while exercising their privileges. Of course, whether a person is reliable and _competent_ to do the work with which they will be tasked is typically considered before any privileges that they will need to do it are granted. Still, it can be impossible to ascertain whether an action is, first of all, erroneous at the time it is effected, then whether it is accidental or intentional. That said, experience has shown an inherent risk. 
Namely, that some of the people who are _authorized_ to access data, change data, or to install software, etc., can and will abuse that authority. Giving someone admin privileges means that the person (it may be a committee) who has the authority to do that trusts the person to whom those privileges are given. It doesn't mean that the person to whom they are given is, or always will be, trustworthy and incorruptible. At the outset, authorization is ultimately a matter of the judgment of the person who has the authority to grant the privileges, and it is a decision that should be reviewed periodically. That can include an "audit" of how each employee has used the privileges that they have been granted, insofar as that should be recorded in a system log(s). Experience has also shown that usually a "lone perpetrator" is the most likely to be caught before his or her actions cause a catastrophe. However, when two or more people act in concert, they may find ways to use their respective authorizations to corrupt and compromise the system -- the data in particular, but often the software as well. They usually do that because they discover an "irresistible" opportunity to embezzle money or steal inventory, even to obtain services for their benefit and profit but for which their trusting employer pays. Since they are "only doing what they are authorized to do" with the system, they often believe (whether correctly) that they are unlikely to be caught. Such collaborations are often very clever, difficult to anticipate and not easily detected while they are in progress. There is almost always some evidence that something is wrong, and perhaps that someone is doing wrong, but determining why the situation is as it is can be very difficult to do. In principle, there is no system of security that cannot be defeated by the collusion of enough people who have respective authorizations to access, create, use, alter and/or destroy the protected resources. 
Again, IMHO, the best prevention is establishing a work environment that lessens the likelihood that anyone will have a motive to look for opportunities to do wrong, to exploit any they find, or to act in concert with others to do so. All the locks in the world won't do any good if the sexton has a motive to open the door to thieves.
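The periodic privilege "audit" mentioned above can start as something very simple: tally the privileged actions each user performed, from whatever logs the system keeps, and ask whether the totals match that person's actual duties. A minimal Python sketch, assuming a made-up three-field log format (real systems would parse auditd output, Windows event logs, or similar):

```python
from collections import Counter

# Hypothetical log lines in the form "user action target"; this format
# is illustrative, not taken from any real logging system.
log_lines = [
    "alice delete /payroll/q3.csv",
    "bob install package-x",
    "alice delete /payroll/q4.csv",
    "carol read /public/readme",
]

# Which actions count as "privileged" is a policy decision.
PRIVILEGED = {"delete", "install"}

# Tally privileged actions per user so a periodic review can compare
# actual usage against what the person's job requires.
tally = Counter(
    user
    for user, action, _ in (line.split(maxsplit=2) for line in log_lines)
    if action in PRIVILEGED
)
print(tally.most_common())  # [('alice', 2), ('bob', 1)]
```

A real review would of course look at targets and timestamps as well, but even this coarse count surfaces the question the comment raises: is the granted authority actually being used for the work it was granted for?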

Michael Kassner

I sense that organizations which realize they need to protect against accidents as well as disgruntled workers are doing what you mentioned, and even more.

Jaqui

But if you use the model I mention below, you can reduce the possibilities drastically. Cables are secured and routed where no one walks, to stop the risk of trip damage. Users have only those permissions required for the work expected of them, to reduce accidental damage to the system. All files live on network storage, and it takes admin privileges to delete any file; users can only alter the content, and the backup of the original version stops them from wiping it out permanently. Users can only change UI settings on their systems, with no privileges to install software.
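One common Unix-flavored approximation of the "users can alter but not delete" idea is the sticky bit on a shared directory: group members can create and modify files there, but only a file's owner (or root/admin) can remove it. A minimal Python sketch; the directory name and mode are illustrative, and this is only one piece of the model Jaqui describes (backups and install restrictions are separate controls):

```python
import os
import stat
import tempfile

# Create a stand-in for the shared network storage area.
shared = tempfile.mkdtemp(prefix="shared_")

# Mode 1770: read/write/execute for owner and group, sticky bit set.
# With the sticky bit, non-owners cannot delete or rename files in the
# directory even though they can write to the directory itself.
os.chmod(shared, stat.S_IRWXU | stat.S_IRWXG | stat.S_ISVTX)

mode = stat.S_IMODE(os.stat(shared).st_mode)
print(oct(mode))  # 0o1770
```

The equivalent shell command would be `chmod 1770 /path/to/shared`; /tmp on most Unix systems uses exactly this trick (mode 1777) so that users can't delete each other's files.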

Michael Kassner

Anyone is an inherent risk, just by tripping over a power cord during a critical data transfer. Or they can turn into a threat after getting mad and using a hammer on said server.

Michael Kassner

I'm still digesting it, but as far as I can see I agree with your assertions.

JosB

Risk is creating or having a situation where you think you will benefit, but where a negative outcome is also possible. A threat is creating or having a situation where only a neutral or negative outcome is possible.

Let's go with your example of employees stealing trade secrets. Most likely they should have access to those secrets so they can do their job better. But what is the actual threat of an employee stealing them? It's either that you cannot do your job anymore, or that (future) competitors gain an advantage. If the documents are stolen but not used for competitive advantage, there is no threat, provided you have a copy. The risk here is that certain employees need access to documents that are considered secret. The threat is those documents getting destroyed without a backup, or the secrets getting into the hands of competitors.

Now, those employees might not even be disgruntled when they 'give' the secrets to a competitor or destroy them. A simple delete or careless shredding is all it takes. Or they put them in a briefcase that gets stolen. Or their computer gets compromised by a virus and the documents end up on the Internet. The moment we are talking about people with no right to access the documents trying to take them, we are talking about insider burglars. That's a completely different situation.

I know the documents on insider threats, for example those provided by the SANS Institute. But when you look at what's actually going on, very often people have far more access or privileges than their job requires. That's a threat: they don't need those rights, so there is only a downside to them.

Take for example the case of Aleynikov you mentioned. I suppose he needed access to the code to do his job. Now suppose he was able to memorize all the essential code (most likely not the full 32 MB of it), or even created that code himself. If he moved to company X, the code would still be lost to the competitors. Taking the code with him would only have saved some time in that case.

Business exists because people are willing to take a risk. And the more risk they take, the more profit is usually possible, but the downside is also greater. The question in the Aleynikov case is far more how much additional benefit there was from him having access to all the code, and what the possible benefits of restricting him could have been.

To give a financial example: suppose we have invested with strategy X for several years, and because of market changes we suddenly hit a loss of $500,000, of which $200,000 was outside the risk margins. People will scream because this was not anticipated and fell outside the margins. But if in the previous years the net gain was $5,000,000, which we would not have had without that strategy, was this really a bad decision? Especially considering that the market changed in a way very few anticipated.
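The financial example above is worth doing as arithmetic, because the conclusion falls out immediately (all figures are the ones from the comment):

```python
# Strategy X earned $5,000,000 over several years, then took a one-off
# $500,000 loss, of which $200,000 fell outside the modeled risk margins.
gain_over_years = 5_000_000
sudden_loss = 500_000
outside_margin = 200_000

# Net result of having run the strategy at all.
net_result = gain_over_years - sudden_loss
print(net_result)  # 4500000: still strongly positive overall

# Share of the loss the risk model failed to anticipate.
print(outside_margin / sudden_loss)  # 0.4
```

Judged only by the headline loss, the decision looks bad; judged by the net result of having taken the risk, it was clearly worth it. The 40% of the loss outside the margins is a modeling problem, not proof the risk shouldn't have been taken.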

Michael Kassner

The subjective versus objective comparison. That makes it understandable and explainable. Thanks Tony.

Michael Kassner

I'd like to hear it. I find it strange that intent doesn't play any part in your decision making. What do you consider it when disgruntled employees steal trade secrets?

JosB

Intent has nothing to do with this. You are either at risk or you ain't.

Let's assume my business will go bust if there is a transaction error of $500,000. That's the risk. Those transactions are part of business, so I cannot avoid them. It doesn't matter whether an accident or someone's intentional action causes a transaction error larger than $500,000. The threat here is that I, alone, am able to process a transaction that has serious consequences for the business. That capability most probably has no business benefit (except in small companies), and it has a huge downside. It's for good reason that payment processes often limit the amount of cash a single person can process, and not only to prevent fraud.

Seems I don't agree with your definition of risk (working in the financial world probably influenced my view on risk a lot).
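The per-person transaction limit JosB describes is usually implemented as dual control: any amount at or above the threshold requires a second authorized person to approve it. A minimal sketch, with the $500,000 figure taken from the comment and the function name purely illustrative:

```python
from decimal import Decimal

# Illustrative threshold: the amount above which a single person's
# mistake (or intentional act) could sink the business.
SINGLE_USER_LIMIT = Decimal("500000")

def requires_second_approver(amount: Decimal) -> bool:
    """Dual-control check: transactions at or above the limit must be
    co-signed by a second authorized person before processing."""
    return amount >= SINGLE_USER_LIMIT

print(requires_second_approver(Decimal("499999.99")))  # False
print(requires_second_approver(Decimal("500000")))     # True
```

Note that the check is intent-blind, exactly as the comment argues: it caps the damage from a fat-fingered amount and from fraud with the same rule, by removing any single person's capability to cause the catastrophic outcome.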

Tony Hopkinson

Risks can only be reduced. Hmm, a safety scenario: say, working at heights. Use a harness or not? What risk does wearing a harness remove: falling, or hitting the floor with lethal force? Now, whether I had a dizzy attack, came to work p1ssed as a fart, had an ill-thought-out plan for suicide, or was pushed off the top by my boss for telling him he was a pratt is irrelevant; what brains I have wouldn't be splattered on the concrete fifty feet below. That's what isolating the threat does for you. Or you could institute a medical, a drug test, and a psych evaluation, and tape up my mouth before I go up.

Risk assessment is essentially subjective; threat (capability) assessment is objective. Threat = capability. A three-year-old boy and a thirty-three-year-old heavyweight boxing champ both offer to knock you out with one punch... which one do you step away from immediately?

Michael Kassner

Tony, could you go into more detail about why you think that way? It's fascinating to read these different viewpoints, and it helps me get a better grasp on the subject.

boxfiddler

The difference between author and subject. etu

Michael Kassner

I'm with you now. You as well; I am sorry to have missed it. But my client footed the bill. Ironically, it was a Hughes satellite issue after all, not the network issue that Hughes insisted upon.

Michael Jay

My reference to intent was that I have no intent to be a threat. Also, great to see you at the gathering, and sorry you got called away.
