Implementing group policies can enhance your network’s security; however, that added security can come at the cost of overall system performance. But when you’re smart about implementing group policies, you can have your security and your performance too. People make quite a few common mistakes when implementing group policies that have a negative impact on system performance. Here, I will guide you through some of these mistakes and how to correct them.
Use a minimum number of group policy objects
A user or computer’s effective group policy isn’t simply defined in one place. Instead, it's composed of a collection of group policy objects (GPOs). The idea is that you can have GPOs at different layers of the Active Directory (AD). For example, you can have a local GPO that governs the local computer, and you can apply GPOs to organizational units or to a domain as a whole.
With this hierarchical approach to group policies, you can custom-tailor a group policy based on a user’s needs. You could use the local security policy to establish the computer portion of the group policy and then use the domain security policy to establish the more general settings, such as password requirements. Finally, depending on the organizational unit that the user belongs to and the various security groups that the user belongs to, you can use other group policy objects to fill in the remaining settings.
Although this technique is designed to help you assign each user the effective group policy he or she needs, it has a couple of drawbacks. First, using multiple group policy objects can be confusing. If you have a problem with the effective group policy, how do you know which GPO is causing the problem? With so many GPOs working together, you’ll also have to worry about contradictory settings.
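To make the layering idea concrete, here is a minimal sketch in Python of how an effective group policy is composed from stacked GPOs. This is a toy model, not real Windows code: the GPO names and dict-based "settings" are hypothetical, and actual group policy processing is far more involved.

```python
# Toy model of effective group policy composition. GPOs are applied in
# order (local -> site -> domain -> OU); a setting in a later GPO
# overrides the same setting from an earlier one.

def effective_policy(gpos):
    """Merge a list of GPOs into one effective policy, in order."""
    policy = {}
    for gpo in gpos:
        policy.update(gpo["settings"])  # each GPO must be processed in turn
    return policy

# Hypothetical GPOs at three layers of the hierarchy:
local = {"name": "Local", "settings": {"screensaver_timeout": 600}}
domain = {"name": "Default Domain", "settings": {"min_password_length": 8}}
ou = {"name": "Sales OU", "settings": {"screensaver_timeout": 300}}

result = effective_policy([local, domain, ou])
# The OU-level GPO is applied last, so its screensaver_timeout wins.
```

Note that the loop touches every GPO in the list, which is the efficiency point of this section: the more GPOs that contribute to a user's effective policy, the more work Windows does at every login.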
Perhaps the biggest problem with taking a hierarchical approach to group policies is efficiency. It can take a long time, relatively speaking, for Windows to read through all of the GPOs that apply to a user and calculate the effective group policy based on those objects. This causes two basic problems.
Because a user’s login process can’t be completed until Windows 2000 has compiled an effective group policy for the user, the process of calculating the group policy can drastically slow down the login process. If Windows takes a long time to compile the effective group policy, the user must wait a long time to log in.
Also, if the server is busy compiling someone’s effective group policy, then it can't do other things. Sure, Windows 2000 can multitask, but the act of multitasking simply assigns various processes different amounts of processor time. Most of the processes don’t actually run simultaneously. They just give the illusion of doing so. So when the server is tied up calculating effective group policies every time someone logs in, other possibly more important processes are being put on hold.
The lesson here is simple: To maximize your server efficiency, use the fewest GPOs possible to build your effective group policies.
Disable unused objects
GPOs can contain hundreds of settings, divided into computer and user configuration sections, but for most networks, large portions of a GPO go unused. However, unless an unused portion of a GPO is disabled, Windows will still process it at login. So, to increase efficiency, make sure you disable the computer or user configuration portion of any GPO that doesn’t use it.
Also, GPOs are designed to stack on top of one another. If a setting is applied in a high-level GPO, the setting will still be effective even after the lower-level GPOs have been applied, unless the setting is blocked or changed somewhere along the way. So there’s generally no reason to apply the setting more than once. I’ve had administrators tell me that they enforce things like password security in every GPO in fear that somehow the setting won’t be included in the effective group policy if they only assign it once. But if you assign a setting like that 15 times, Windows will have to process that setting 15 times at login.
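The cost of that kind of duplication is easy to quantify with a small, hypothetical sketch. Everything here is illustrative (the setting names and the simple work metric are made up), but it shows why defining a setting once at the domain level beats copying it into every GPO.

```python
# Hypothetical model: count how many individual settings Windows must
# evaluate at login for two ways of assigning the same effective policy.

def processing_work(gpos):
    """Total number of settings that must be processed across all GPOs."""
    return sum(len(gpo) for gpo in gpos)

# Fifteen OU-level GPOs, each with one setting of its own:
ou_gpos = [{f"other_setting_{i}": True} for i in range(15)]

# Bad practice: the password setting is duplicated into all 15 GPOs.
redundant = [dict(gpo, min_password_length=8) for gpo in ou_gpos]

# Better: the password setting is defined once, at the domain level,
# and simply inherited by the lower-level GPOs.
lean = [{"min_password_length": 8}] + ou_gpos

# Both lists merge to the same effective policy, but the redundant
# layout nearly doubles the settings processed at every login.
print(processing_work(redundant))  # 30
print(processing_work(lean))       # 16
```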
Cross-domain group policies
For efficiency's sake, avoid cross-domain group policies. It may sometimes be difficult to avoid compiling an effective group policy from GPOs that reside in multiple domains, but doing so is an extremely bad practice. Cross-domain group policies take a long time to load because of the difficulties involved when one domain controller must access a domain controller in a different domain.
For one thing, the two domain controllers must work through a seemingly endless series of security checks. And depending on the design of your network, even when the two domain controllers finally agree to share information, you may have to deal with the possibility that the two domains are separated by a slow WAN link.
From an administrative standpoint, it may seem easier to keep all of your GPOs in a single domain, but if you redistribute them to the domains that they affect, you’ll see a big boost in network performance.
Take slow links into account
In the section above, I mentioned how two domain controllers might have to communicate across a slow WAN link. However, cross-domain operations aren’t the only time you’ll have to worry about slow links: in Windows 2000, you can have slow links within a single domain. Although building a single domain that spans slow links isn’t necessarily a bad practice, those links can cause some serious inefficiencies with your group policies.
Slow links can cause big problems because each domain controller within a domain contains at least a partial copy of AD. If an administrator makes a change to AD, such as updating a group policy, the change is applied to the first domain controller that responds to the request, and that domain controller could reside anywhere in the domain. Once the change has been applied, the copies of AD held by the other domain controllers are out of sync. The updated domain controller sends a message to the other domain controllers to tell them a change has occurred, and they in turn request that the updated domain controller send a copy of the change so that they can update their own copies of AD.
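The notify-and-pull pattern just described can be sketched in a few lines of Python. This is a conceptual model only; the class and method names are hypothetical, and real AD replication involves far more machinery.

```python
# Conceptual sketch of notify-and-pull replication between domain
# controllers: a change lands on one DC, which notifies its peers;
# each peer then pulls the changed object to resync its copy of AD.

class DomainController:
    def __init__(self, name):
        self.name = name
        self.ad_copy = {}  # this DC's (partial) copy of AD

    def apply_change(self, key, value, peers):
        """Apply a change locally, then notify the other DCs."""
        self.ad_copy[key] = value
        for peer in peers:
            peer.on_change_notification(self, key)

    def on_change_notification(self, source, key):
        """Pull the changed object from the DC that announced it."""
        self.ad_copy[key] = source.ad_copy[key]

dc1, dc2, dc3 = (DomainController(n) for n in ("DC1", "DC2", "DC3"))
# A GPO edit happens to land on DC1 first:
dc1.apply_change("SalesGPO", "v2", peers=[dc2, dc3])
# After the notify-and-pull round, all three copies of AD agree.
```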
The same principle holds true when a client tries to read an AD object, such as a GPO. The client reads the object from the first domain controller to respond to the request, regardless of its physical location. This means that it’s theoretically possible for a client to read group policy information from a domain controller that’s on the other side of a WAN link, making for a very slow login process and congesting the link with unnecessary network traffic.
The solution is to divide the domain into sites. Sites are an AD mechanism that forces Windows 2000 to acknowledge slow network links. When you divide a domain into sites, the domain still functions as a single entity. The difference is that the domain controllers, member servers, and clients become aware of their geographic position within the domain.
Suppose, for instance, that a small company has two buildings right next door to each other. It might be overkill to create two separate domains, so the company could use a single domain but place each building in its own site. By doing so, each machine within a site would know that it has to communicate across a slow link to reach machines in the other site. Because communicating across a slow WAN link isn’t usually a desirable operation, the machines are designed to limit communications across the link whenever possible. This means that at login time, the clients would attempt to pull group policy information from a domain controller located within their own site before attempting to communicate with the other site.
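The "prefer my own site" behavior can be summarized with a short, illustrative sketch. Again, this is not real domain controller locator code; the function, site names, and data layout are assumptions made for the example.

```python
# Illustrative sketch: with sites defined, a client prefers a domain
# controller in its own site, falling back to any available DC (which
# may sit across a slow WAN link) only when no local DC exists.

def pick_domain_controller(client_site, domain_controllers):
    """Return a DC in the client's site if one exists, else any DC."""
    local = [dc for dc in domain_controllers if dc["site"] == client_site]
    return (local or domain_controllers)[0]

dcs = [
    {"name": "DC1", "site": "BuildingA"},
    {"name": "DC2", "site": "BuildingB"},
]

# A client in BuildingB reads its group policy from the local DC2,
# keeping GPO traffic off the WAN link between the buildings.
chosen = pick_domain_controller("BuildingB", dcs)
```

Without the site definitions, the client would simply take the first domain controller to respond, which is exactly the cross-link scenario the section above warns about.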
But how does dividing a network into sites affect the process of domain controller updates? Although the domain controllers are divided into sites, they are still a part of the same domain and must therefore continue to share AD updates. The site barrier simply slows the process down a bit. If an administrator updated a group policy, the site definitions would pretty much guarantee that the change would be applied to a domain controller in the same site as the administrator’s computer. The update would be replicated within the site in the normal manner.
The difference would be that one machine in each site would act as a bridgehead server. The bridgehead server would be responsible for collecting updates and sending them to the other site. Because this usually involves communicating across a slow link, the bridgehead server would wait awhile before sending the updates. Rather than repeatedly congesting the line with a lot of small updates, it would send a whole batch of updates. When the updates are sent to the other site, they arrive at the other site’s bridgehead server. It’s this server’s responsibility to distribute the changes within the remote site.
Remember that you don’t want each server in the remote site to be updated from across the slow link. Instead, only one set of changes is passed across the link, and the changes are distributed once they arrive at the remote bridgehead server. As you can see, dividing domains that contain slow links into sites can also boost efficiency.
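The batching idea behind a bridgehead server can be modeled in a few lines. This is a hypothetical sketch of the concept only; the class name and methods are invented for illustration and bear no relation to actual replication code.

```python
# Conceptual model of bridgehead batching: rather than crossing the
# slow WAN link for every small AD update as it arrives, the bridgehead
# queues updates and sends the whole batch in a single transfer.

class Bridgehead:
    def __init__(self):
        self.pending = []        # updates waiting to cross the link
        self.link_transfers = 0  # times the slow WAN link was used

    def queue_update(self, update):
        """Collect an update from within the local site."""
        self.pending.append(update)

    def replicate(self):
        """Send all queued updates across the link as one batch."""
        batch, self.pending = self.pending, []
        if batch:
            self.link_transfers += 1
        return batch

bh = Bridgehead()
for change in ["gpo-edit-1", "gpo-edit-2", "gpo-edit-3"]:
    bh.queue_update(change)

batch = bh.replicate()
# Three updates cross the slow link in one transfer instead of three;
# the remote bridgehead then distributes them within its own site.
```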
Though it's easy to make some of these types of mistakes, you can recover system performance by making some simple tweaks and changes. Even a little planning and cleaning of your AD tree as part of your administrative duties will go a long way in terms of system efficiency.