A common misconception among IT pros is that “locks are bad” and that you must do everything possible to keep database locking from blocking processes. In reality, to keep a database consistent, the database engine must have a mechanism for acquiring exclusive use of a resource while that resource is being modified.
SQL Server uses locks to achieve this consistency. Locks are objects the database engine uses to ensure that only one thread can access a resource at a time. Without locks, separate processes could modify the same data concurrently, potentially leaving the database in an inconsistent state. Locks are a good thing, but you should design your applications to minimize the number of database locks involved.
Here are details about a stored procedure that enables you to diagnose your database locking issues.
Finding out what’s getting locked
A sluggish system means it is time for you to do some investigation. A great place to start is to determine the number and frequency of locks occurring on your system. If your environment is highly transactional, it is very common for separate applications to contend for resources, which causes locks. The key to resolving these issues is being able to determine which resources are being locked and which processes are contending for them.
The sp_lock system stored procedure is packaged with SQL Server 2000 and will give you insight into the locks that are happening on your system. This procedure returns much of its information from the syslockinfo table in the master database, a system table that contains information on all granted, converting, and waiting lock requests.
Let’s take a look at executing the sp_lock procedure to see what information it provides:
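Calling the procedure is straightforward. With no parameters it reports every current lock; you can also pass one or two spids to narrow the output (the spid value 52 below is just an example):

```sql
-- Report all current locks on the server
EXEC sp_lock;

-- Report only the locks held by a single session (example spid)
EXEC sp_lock 52;
```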
On my system, here’s what the procedure returns. The information returned from sp_lock isn’t self-explanatory, and it will take some investigation to get to the meaningful data. You can copy the text of this stored procedure and create a new one that gives you a better explanation of what is going on in the system. (I will only focus on the data returned by sp_lock.)
Looking at the results above, you see these fields: spid, dbid, objid, indid, type, resource, mode, and status. The spid is the process identification number, which identifies your connection to SQL Server. To find out which user is associated with a spid, execute the sp_who stored procedure and pass the spid as a parameter. The dbid identifies the database the lock is occurring in; you can look it up in the sysdatabases table in the master database. The objid field indicates which object in that database the lock is occurring on. To identify this object, query the sysobjects table in the database where the lock was taken (not in master; sysobjects is a per-database table) for that specific objid.
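The lookups described above can be sketched as follows. The spid, dbid, and objid values are hypothetical placeholders; substitute the values sp_lock returned on your system:

```sql
-- Identify the user behind a spid (52 is a hypothetical example)
EXEC sp_who 52;

-- Resolve a dbid to a database name (5 is a hypothetical example)
SELECT name FROM master..sysdatabases WHERE dbid = 5;

-- Resolve an objid to an object name; run this while connected to the
-- database identified by dbid, since sysobjects is a per-database table
SELECT name FROM sysobjects WHERE id = 981578535;
```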
The single record produced in the screenshot above will not necessarily be indicative of what is transpiring in your production environment. When you run this procedure, you want to look for result sets of 500 to 1,000 rows or more. You will probably get a different result set each time you execute sp_lock, because new locks are acquired and older locks are released continuously. If you find that a large number of records returned by sp_lock share the same spid, that process is likely inside a large transaction, and its locks may begin to block other transactions.
When you notice a spid that has acquired a large number of database locks, it is helpful to determine what stored procedure or statement is being run. To do this, run the following DBCC command:
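The command in question is DBCC INPUTBUFFER, which reports the last statement a session sent to the server (the spid 52 below is a hypothetical example; use the spid you identified with sp_lock):

```sql
-- Show the last statement submitted by session 52; the statement text
-- is returned in the EventInfo column of the result set
DBCC INPUTBUFFER (52);
```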
This DBCC command will return information regarding the statement running in the EventInfo field.
A solid starting point
A slow-running system can be indicative of a high number of locks on your tables. These locks may be caused by processes such as a user running a very long query on your system, a process that hogs resources, or two critical processes that compete for the same resources and frequently deadlock.
Once you find the process that you think is slowing down your system, in most cases your next step should simply be to monitor the system. It’s not a good idea to kill a process just because it holds a large number of locks, unless you are absolutely certain that nothing else will be adversely affected; instead, think of ways to automate the analysis of your locking situation. Another idea is to set up a notification for when system locks reach a certain threshold during certain times of the day.
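A minimal sketch of such a threshold check is below. It counts the rows in syslockinfo and raises an informational message when the count crosses a limit; the threshold of 500 is an arbitrary example, and in practice you would schedule a check like this as a SQL Server Agent job and route the message to an operator:

```sql
-- Hypothetical sketch: flag when the current lock count crosses a
-- threshold (500 is an arbitrary example value)
DECLARE @lockCount int

SELECT @lockCount = COUNT(*)
FROM master..syslockinfo

IF @lockCount > 500
    RAISERROR ('Lock count %d exceeds threshold', 10, 1, @lockCount)
```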
The more information you can gather about your system, the greater advantage you will have regarding its resolution.
Tim Chapman is a SQL Server database administrator and consultant who works for a bank in Louisville, KY. Tim has more than eight years of IT experience, and he is a Microsoft certified Database Developer and Administrator. If you would like to contact Tim, please e-mail him at email@example.com.