10 best practices for Windows security

Security risks continue to grow for organizations both large and small. Brien Posey offers 10 suggestions for making sure your networks are as well protected as possible.

Although network security has always been important, the stakes have never been higher than they are today. It should be a major priority for every organization, and these 10 simple tips can help.

1: Reduce the attack surface whenever possible

One of the first steps you should take when hardening a machine is to reduce its attack surface. The more code that's running on a machine, the greater the chance that the code will be exploitable. You should therefore uninstall any unnecessary operating system components and applications.
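One practical way to act on this is to compare what is installed against an approved baseline and flag anything extra for review. Here is a minimal sketch in Python; the component names and the baseline itself are purely illustrative:

```python
# Sketch: flag installed software that is not on an approved baseline.
# The component names below are hypothetical examples, not a real inventory.

def find_unapproved(installed, baseline):
    """Return installed items that are not on the approved baseline, sorted."""
    return sorted(set(installed) - set(baseline))

baseline = {"Windows Defender", "Remote Desktop", "PowerShell"}
installed = {"Windows Defender", "Remote Desktop", "PowerShell",
             "Telnet Client", "Games"}

for item in find_unapproved(installed, baseline):
    # Each of these is a candidate for removal during hardening.
    print("Review/uninstall:", item)
```

In a real deployment you would feed this from your software inventory tool rather than a hard-coded set, but the principle is the same: anything not explicitly approved is attack surface to be questioned.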

2: Use only reputable applications

Given the current economic climate, it might be tempting to use freeware, deeply discounted, or open source applications. While I will be the first to admit that I use a handful of such applications in my own organization, it is critically important to do a little research before adopting one. Some free or low-cost applications are designed to serve ads to users; others are designed to steal personal information or track Internet browsing habits.

3: Use a normal user account when you can

As a best practice, administrators should use normal user accounts whenever they can. If a malware infection occurs, the malware generally has the same rights as the person who is logged in, so it can be far more damaging if that person has administrative permissions.

4: Create multiple Administrator accounts

In the previous section, I discussed the importance of using a regular user account whenever possible and using an Administrative account only when you need to perform an action that requires administrative permissions. However, this does not mean that you should be using the domain Administrator account.

If you have multiple administrators in your organization, you should create a personalized administrator account for each of them. That way, when an administrative action is performed, it is possible to tell who did it. For example, if you have an Administrator named John Doe, you should create two accounts for that user. One will be the normal account for day-to-day use, and the other will be an administrative account to be used only when necessary. The accounts might be named JohnDoe and Admin-JohnDoe.
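The naming convention above is easy to automate when provisioning accounts. A small illustrative sketch follows; the `Admin-` prefix and the name format are just the convention from this example, not a Windows requirement:

```python
def account_names(first_name, last_name):
    """Return the (normal, admin) account-name pair for one administrator,
    following the Admin- prefix convention described above."""
    base = f"{first_name}{last_name}"
    return base, f"Admin-{base}"

normal, admin = account_names("John", "Doe")
print(normal, admin)  # JohnDoe Admin-JohnDoe
```

A provisioning script built around a helper like this guarantees every administrator gets both accounts and that the pairing is obvious in the audit logs.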

5: Don't go overboard with audit logging

Although it may be tempting to create audit policies that track every possible event, there is such a thing as too much of a good thing. When you perform excessive auditing, the audit logs grow to massive sizes. It can be nearly impossible to find the log entries you're looking for. Rather than audit every possible event, it is better to focus on auditing only the events that matter the most.
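For example, rather than retaining every event, you might watch only a handful of high-value Windows Security log event IDs (4625 = failed logon, 4720 = user account created, 4732 = member added to a security-enabled local group). A sketch of that filtering idea in Python, using dictionaries to stand in for parsed log entries:

```python
# Event IDs worth watching; these are real Windows Security log IDs.
# 4625 = failed logon, 4720 = user account created,
# 4732 = member added to a security-enabled local group.
IMPORTANT_EVENTS = {4625, 4720, 4732}

def filter_events(entries):
    """Keep only entries whose event ID is on the watch list."""
    return [e for e in entries if e["event_id"] in IMPORTANT_EVENTS]

log = [
    {"event_id": 4624, "user": "jdoe"},   # successful logon: noise
    {"event_id": 4625, "user": "jdoe"},   # failed logon: keep
    {"event_id": 4720, "user": "admin"},  # account created: keep
]
print(filter_events(log))
```

The exact watch list will vary by organization; the point is to decide it up front rather than auditing everything and drowning the signal.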

6: Make use of local security policies

Using Active Directory-based group policy settings does not nullify the need for local security policy settings. Remember that group policy settings are enforced only if someone logs in using a domain account; they do nothing if someone logs in with a local account. Local security policies can help protect your machines against local account usage.

7: Review your firewall configuration

You should use a firewall at the network perimeter and on each machine on your network, but that alone isn't enough. You should also review your firewall's port exceptions list to ensure that only the essential ports are open.

A lot of emphasis is typically placed on the ports used by the Windows operating system itself, but you should also be on the lookout for any firewall rules that open ports 1433 and 1434. Microsoft SQL Server uses these ports (TCP 1433 for client connections and UDP 1434 for the SQL Server Browser service), and they have become a favorite target for attackers.
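A quick way to spot-check whether such ports are reachable on a given host is a simple TCP connect test. This Python sketch checks TCP only (1434 is actually a UDP port in SQL Server deployments, so treat this as a rough screen, not a full audit):

```python
import socket

def is_port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on success instead of raising an exception.
        return s.connect_ex((host, port)) == 0

for port in (1433, 1434):
    print(port, "open" if is_port_open("127.0.0.1", port) else "closed")
```

A dedicated scanner such as nmap gives a far more complete picture, but even a check like this can catch an accidentally exposed database port during a routine review.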

8: Practice isolation of services

Whenever possible, you should configure your servers so that they perform one specific task. That way, if a server is compromised, the hacker will gain access to only a specific set of services. I realize that financial constraints often force organizations to run multiple roles on their servers. In these types of situations, you may be able to improve security without increasing costs by using virtualization. In certain virtualized environments, Microsoft allows you to deploy multiple virtual machines running Windows Server 2008 R2 for the cost of a single server license.

9: Apply security patches in a timely manner

You should always test patches before applying them to your production servers. However, some organizations really go overboard with the testing process. While I certainly do not deny the importance of ensuring server stability, you have to balance the need for adequate testing with the need for adequate security.

When Microsoft releases a security patch, the patch is designed to address a well-documented vulnerability. This means that hackers already know about the vulnerability and will be specifically looking for deployments in which the patch that corrects that vulnerability has not yet been applied.

10: Make use of the Security Configuration Wizard

The Security Configuration Wizard allows you to create XML-based security policies, which can then be applied to your servers. These policies can be used to enable services, configure settings, and set firewall rules. Keep in mind that the policies created by the Security Configuration Wizard are different from security templates (which use .INF files). Furthermore, you can't use group policies to deploy Security Configuration Wizard policies.
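To illustrate the XML-based idea, here is a Python sketch that generates a small policy file. The element and attribute names are made up for illustration; they are not the actual Security Configuration Wizard schema:

```python
import xml.etree.ElementTree as ET

def build_policy(name, services, open_ports):
    """Build a minimal XML security policy document.
    Illustrative schema only -- not the real SCW format."""
    root = ET.Element("SecurityPolicy", Name=name)
    svcs = ET.SubElement(root, "Services")
    for svc in services:
        ET.SubElement(svcs, "Service", Name=svc, StartupMode="Automatic")
    fw = ET.SubElement(root, "FirewallRules")
    for port in open_ports:
        ET.SubElement(fw, "OpenPort", Number=str(port), Protocol="TCP")
    return ET.tostring(root, encoding="unicode")

# A hypothetical web-server role: one service, two open ports.
xml_doc = build_policy("WebServer", ["W3SVC"], [80, 443])
print(xml_doc)
```

The appeal of this approach, whether with the real wizard or a homegrown template, is that a server's entire security posture becomes a reviewable, reusable document rather than a pile of manual settings.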

About

Brien Posey is a seven-time Microsoft MVP. He has written thousands of articles and written or contributed to dozens of books on a variety of IT subjects.

Comments
Chashew

Ok, let's pretend I am a greenhorn for a second. How do I check my firewall ports? Which firewall is better? And what can be encrypted safely on Win 7? A lot is said about what to do, but nothing about how to deploy the aforesaid functions is ever posted. Please use the KISS method so I can post it for my online friends.

Paul

Look, at the current rate of mfgr cost in the US, around $7-800 I understand, and $280 cost in Ukraine, then under $100 in China, I can finally, after the Chinese buyers accept 20-50k, and mfgr agreements, start to sell to SOHO, then, you monkeys will start to 'check it out', when it becomes within the normal $70 per year spent by online users per year world wide... then, send you pretty girlfriends by to get you a discount ))))) on second thought... don't... I've lived in Ukraine for near 20 years... Enough pretty girls!!

B.Kaatz

"One of the first steps you should take when hardening a machine is to reduce its attack surface. The more code that's running on a machine, the greater the chance that the code will be exploitable. You should therefore uninstall any unnecessary operating system components and applications." Isn't that going against the Microsoft paradigm of loading every possible thing an end-user *might* need, whether or not they want it, use it, or even have the actual hardware to support it? For those running Win7 who do not use Windows Messenger, how many have disabled the service and uninstalled it? Right. And it's not just Microsoft... Any of the generic (READ: "vanilla") distributions/builds of Linux do it, too. That is the purpose of having a vanilla system: being able to add the hardware and already have the driver support for it available. If you want to truly strip down your system to what you need, you will have to compile your own kernel. But, of course, thereafter you will need to recompile it for every new piece of hardware you want to add. This is a balancing act that few end-users have both the skills and access to do (sorry, Microsoft and Apple users, but you try and you violate your EULAs), and even fewer have the most important commodity required for consistently rebuilding and updating their systems in this fashion: time.

Neon Samurai

Fixed it for you. (I couldn't resist.) Seriously though, open source by its nature tends to be far more trustworthy, especially in terms of security-related applications. A FOSS-developed program with backdoor, sniffing, or similar functionality hidden inside it gets found out pretty quickly. With it being an information- and reputation-based economy, FOSS developers have more forces pushing against introducing malicious code than for it. After all, the source is available for anyone to see; it takes only one curious security researcher to blow a developer conspiracy wide open. Evidence and history have shown that "some free or low cost applications are designed to serve ads to users; others are designed to steal personal information from users or track their Internet browsing habits" is not a common or welcome trait in open source software. Don't take FOSS on blind faith either, but realize that it's not remotely as susceptible to malicious developers as more secretive development methods. Freeware and similar closed source programs... absolutely agree. You need to be aware and vet them in a lab with Wireshark or similar to see what traffic they're actually transmitting. The secrecy of the code breeds poor choices like exploiting one's user base by harvesting information or system resources.

Number 4: love it!! I've often noticed the logs' lack of differentiation between administrators sharing the same account. It's not even a Windows-limited trait; share Root on a *nix box and your logs will show Root doing stuff with no indication of which person was actually on. You may be able to correlate with su if you set things up properly to block Root from logging in directly (log in as User, then su to Root or sudo).

5. Logging... I wish Windows managed logs more like the Unix world: flat text files archived historically based on size or a regular rotation schedule. The active log file doesn't get too big. The historical log files compress down to nothing and remain available if needed. No special tools are required to read logs on a system that won't boot to GUI. And they are useful rather than sparse and cryptic. I don't need a special utility just to look up the English meaning of obscure error codes.

6. Local policies. Do I understand correctly that the effective policy would be a combination of the local and AD policy, or does AD negate the local policy? It seems to be the first case, but always best to confirm if unsure. I wish there was an easier way to replicate local policy, personally. I've not had much luck with exported policy templates, and manually setting local rules gets tiring when you're doing more than two machines at a time.

8. "If a server is compromised," then it was a criminal, not a hacker, that did it. A real hacker won't break into your system without prior permission to carry out the penetration test. Don't weaken your article by promoting the fallacy that hackers commit criminal acts. Another way to explain it: some hacker skill may have been involved in discovering vulnerabilities which can be used to break into a server, but that's where any real hacking ends. The hacker won't then use those findings to carry out unauthorized breaches. Criminals copying methods previously discovered by hackers does not make the criminal a hacker too. Now, the interesting question: under what circumstances does MS allow one to virtualize multiple installs under the same single license? Wouldn't that be limited to either Microsoft's VM software or possibly VMware? In both cases one is going to be paying for the VM infrastructure. Perhaps a Win host, VM layer, and multiple-install Win guest license works out to less than a Win host, VM layer, and several separate Win guest licenses. The thing is, the hardware is usually the less expensive component of the server; 2 grand for a box and 4 grand for the Microsoft license running on it (Win Server + Exchange + user licenses for Exchange... it adds up quickly).

10. I have to look into this bit of sweetness... that sounds like one of the outstanding things I've been looking for to replicate my method of templating Unix-like systems.

robo_dev

Is this article meant for Windows servers on a local network, Internet-facing Windows servers, individual home users, or corporate PC users? It would be helpful to state that this is aimed at corporate PC LAN workstations, or at enterprise Windows servers, if that is the case. When you say "Windows security," that covers products from a Windows phone running CE to a Windows SQL database or IIS web server.

Network security and OS security are not the same thing. In fact, it could be argued that these two things are often in direct conflict with each other, so it may not be useful to use the two terms interchangeably. And there needs to be context here as well; the network controls and security requirements for a Web-facing Windows server are quite different from those of a typical corporate LAN workstation. Does it matter if my port 1433 is open on my workstation? Yes; if it isn't, my corporate SQL database apps won't work very well.

Context: the article only mentions firewalls briefly, without stating which one; is this the PC personal firewall or the corporate firewall that is being discussed? Most Windows database or application servers on an enterprise network are not going to be configured to use the Microsoft firewall services, in any case. The security controls you put in place for a Windows IIS server make no sense for a workstation, because you then break some very useful things, such as printing or browsing the network neighborhood. One would hope that PCs on a private home network should not have to worry about attack surface, unless their kids are really skilled hackers. And, seriously, unless you're off your meds, how many workstation users do audit logging and maintain multiple administrator accounts? (Or even know how to enable audit logging?)

Isolation of Services sounds like Service Isolation, which is a very different thing. It might be better to simply recommend using more virtualization. Of course, if you put your ten most important apps in ten VMs all on one server with one motherboard, you've created one Titanic-scale single point of failure. In general, you're not reducing the risk very much by spreading your apps over multiple servers or VMs, since whatever exploit worked on one server will surely work on all the others.

oldbaritone

Set up and enforce access control policies, periodic password change policies, et cetera!

Deadly Ernest

Install Zorin OS Linux and avoid most of the attack software; the firewalls etc. they have are simple to use too.

robo_dev

So the same advice does not really apply to a Windows IIS server versus your grandmother's Windows XP desktop. In the case of a web server, obviously if there is any attack surface, it's going to get bothered more than a Victoria's Secret model at a computer convention, while if you start shutting off services and closing ports on grandma's PC, stuff like printing or network browsing will stop working. And, of course, the PC is going to have a whole steaming pile of shovel-ware, including that God-forsaken Windows Live set of apps, and the 30 useless apps that the PC maker decided that you 'need' on your PC. If I buy a Dell, I don't need the Dell Support App, the Dell Documentation App, the Dell ISP signup App, etc. Then when I install my Canon printer, there's a freaking app that phones-home to Canon to register the other 20 applications it included with the printer.....but I digress.

Neon Samurai

And if one is going to change the OS for security reasons, there are better distributions than Ubuntu to select.

robo_dev

To clarify the virtual license issue, it does not really work the way the OP stated it. IF you're a volume licensing agreement customer AND you buy Windows 2008 R2 Enterprise, THEN you can run four VMs of that OS on one license. http://www.microsoft.com/licensing/about-licensing/virtualization.aspx If you're a VPA customer running Windows Server 2008 R2 Datacenter, you can run any number of software instances in physical and virtual operating system environments on a server. BUT CALs are sold separately with Datacenter. So Datacenter effectively costs more, since it would appear that you would need to buy CALs for each virtual instance of 2008, since it does not come with any. So the statement "Microsoft allows you to deploy multiple virtual machines running Windows Server 2008 R2 for the cost of a single server license" is not 100% accurate. It's like saying the car I'm selling has four-wheel drive, if you buy the four-wheel-drive option.

bobbyrambo

I was wondering the same thing. Anyways, at home, I use geswall to isolate my applications from the internet. Why don't corporate office networks use something like that?

JCitizen

in this area. There were the local machine accounts and then the domain accounts that were to be allowed. Domain for clients were on servers, while local machine were only techs with local administrative rights. Of course the server had local accounts too, but extremely limited. Everything he said made sense to me, but then I didn't read much into it.

Neon Samurai

In my view, both should assume the environment is hostile by default and be configured accordingly. Don't reduce system security just because the server isn't pointing outward. That's like leaving all your desktops without firewall and AV because you have a perimeter device filtering and scanning inbound traffic. In terms of different ports being open and such, that does have to do with the services being accessed remotely, but it doesn't really change the article, as specific firewall settings were not given. Regardless of machine, you should be closing down everything in the firewall and then opening only the minimum required by that machine. It may change the attack surface of each separate system, but it doesn't change the config method.

Paul

Please, tell me who as a SOHO makes these rules... I think I need some of what you're smoking as a 19 year Ukraine resident )))

Chashew

I have dabbled in Ubuntu, but most people are fixed on Windows. I like the idea of going cold turkey from MS, but I feel the learning curve might be too much for some. Kinda like the change from 3.0 to XP. At any rate, is there a standalone firewall that does what we expect? Or can we still expect hackers until the end of time? Will Zorin OS install as a dual boot next to a Windows OS like Ubuntu does? If not, does it take a complete HD wipe and fresh install? Thanks so much for the help, as some of the data is either left out or missed in tech talk for the average joe like me.

Neon Samurai

I've tried to stay as far away from dealing with the quagmire that is MS licensing structures but it's becoming required knowledge for my work. Thanks for the clarification.

robo_dev

Typically internal servers are going to have ports open to run tape backup agents, do database performance monitoring, provide access to the database, etc., etc. The internal server may have a dozen open ports, and may also have a whole lot of user accounts, and a good number of applications. Much more attack surface, much greater chance that there's a weak password. It's not intentionally insecure, but the user/application requirements make it so.

Deadly Ernest

and I find it much easier to transition people away from Windows. I have a lot of older clients who don't like Win 7 and hate the idea of Win 8, so Zorin is an easy transition for them. From experience I've found it easier to personalise it in the Gnome desktop and then use the built in changer to switch it to the XP or Win 7 desktop, and if you pay for the extra versions you can even get a Win 2000 desktop. If you don't like the firewall they have, then you can download others which are free. As to a general firewall, to properly protect Windows you don't need a firewall on your Windows system, you need it on another box between your system and the Internet. There are a few devices you can buy that do this as well.

JCitizen

it's enough to drive a man crazy!!!

Neon Samurai

I thought SP3 finally had some outbound filtering, as does the shiny new Win7 firewall. Either way, if something can get admin rights, your Windows firewall is boned, since it can simply open any desired port, as you point out.

"The situation is not very different on a Windows server, really. If the most common ports used for attack are open on the firewall, then what is the firewall really doing for you?" This is where I limit services to given IP ranges as minimally required. It's not whether a port is open or closed, but from which addresses it is open or closed. My real understanding of network packet filtering came with building *nix servers after growing up on Windows boxes. You can't even get a TCP three-way handshake without allowing it in the firewall, if it's set up properly and the port/source/destination values are as expected:

iptables -A INPUT -p TCP --dport 22 -s 192.168.0.5 -d 192.168.0.10 -j ACCEPT
iptables -A INPUT -j DROP

If it isn't a TCP packet on port 22 from IP .5 going to my NIC with IP .10, then drop it like a hot potato. It'll ignore .4 and less or .6 and greater. I won't even be able to hit the loopback localhost without allowing it through another iptables line. Granted, a hardware firewall is even better, but no one is going to put a SonicWall in front of every network node. There is a nifty USB dongle which includes a firewall, though I'm not sure how effective it is. Still, if software is all you've got, then it's better than nothing.

Anyhow, my original point was simply that the requirements of the server (SMB available to all internal) do not change the security approach of minimizing surface area, regardless of server location. If your server requires those frequently attacked ports to be open to your entire LAN, then that's your minimum, not justification for opening additional unrequired ports.

robo_dev

My only point of difference involves the use of firewalls on local machines, and what we call a firewall. Of course, on a workstation on a local network, the Microsoft firewall is not all that useful, versus a personal firewall that's part of an AV package. Since the Microsoft firewall does not block any outbound packets, and you need to open up the ports for Microsoft networking to work, there are a very limited number of scenarios where the cost, in terms of performance and support issues, makes it worthwhile to use at all. The situation is not very different on a Windows server, really. If the most common ports used for attack are open on the firewall, then what is the firewall really doing for you? Just because a firewall is not blocking a port does not mean there is a service listening at that port. Don't forget, too, that the Microsoft firewall can be manipulated programmatically quite easily, so if a virus wants to open a port, it can do that. I'm not trying to argue, but rather to consider what I typically see on large enterprise networks. Of course, that's my opinion, I could be wrong....

Neon Samurai

I read the article as discussing an approach to security versus the natural attack surface remaining after that approach is applied. The suggestion seemed to be that one should minimize the attack surface, not that the attack surface should be uniform across differing systems. The difference in minimized attack surface is an attribute of the server's responsibilities, not of the method used to harden it.

Internal and external webservers may require Drop All with the exception of http/https ports for given ranges of authorized source addresses. The minimum required open ports do not differ due to the location of the server. The attack surface differs only by the ranges of authorized source addresses. One hasn't used a different security approach to reach those differing minimized attack surfaces. The location of the server does not change the security approach.

For a web server and a file server, one is absolutely going to have a different natural minimized attack surface by the very fact of differing access requirements. The internal webserver requires at least two ports open where the internal fileserver requires three or four. Here, the open ports may differ while the authorized source addresses do not. The common security approach still keeps a firewall on the machine and all unrequired ports blocked. The approach does not differ because the server's assigned tasks differ.

A server may require multiple ports open to multiple workstations, where a workstation may require a few ports open only to the management server and the admin's workstation. In both cases, the minimum is opened to the minimal required source address ranges. The suggested security approach does not differ due to differences in class of machine.

My thinking is that for the security approach to differ, one would have to prescribe "deny all, allow minimum required" for the external webserver and "allow all, deny only those proven hostile" for the internal webserver. It's a difference between how the admin approaches security and the minimum requirements of the server based on its tasks. In all cases above, the attack surface is minimized to only what is required by the respective machine through the same security approach. This thinking is based on my hardening internal servers as much as external servers. Assume the traffic inside is as hostile and arbitrary as the traffic outside. Even the internal workstations get a firewall with minimal ports open, allowing only relevant address ranges.