The database server is the heart of information storage for the enterprise, so you must take special care in managing this critical resource. When a problem arises, you must address it quickly, efficiently, and correctly, or an organization stands to suffer critical losses. In this Daily Drill Down, I will provide some tips on maintaining and troubleshooting your Microsoft SQL Server 2000 installation.
Backups are too important to take lightly
In all the environments in which I have worked, backups are one thing I have never taken lightly. In my opinion, backups are second in importance only to the data itself. As a result of this mindset, I am adamant about making sure that backups are done properly and on a regular schedule.
In my current environment, I have provided my team with clearly labeled tapes and a log to track when backup tapes were changed and when certain backup tapes were taken to a secondary storage facility. Going out of business because of a fire or other disaster that wipes out electronic records is not an option.
While maintaining good backups should be an integral part of any IT department’s daily routine, special care must be taken when backing up Microsoft SQL databases. If you attempt to directly back up the files of an open SQL Server database, you will be seriously disappointed: the resulting backup will be corrupt and unusable. This doesn’t mean you can’t back up the database; the procedure is just a little different than for other files. There are three common methods for backing up a SQL database:
- Use a SQL backup agent with your backup software. Most enterprise backup software, including Computer Associates’ ArcServe and Veritas’ BackupExec, have an option that allows online backups of SQL Server databases. The advantage of using this type of option is that you never need to take your SQL Server down to back it up. The backup agents are quite reliable.
- Run a SQL-based database backup and back up the output files. Backups of databases need not be initiated from the operating system. SQL Server 2000 includes utilities to dump databases out to flat files, which are then closed upon completion of the backup job. This makes them accessible by the operating system’s backup software.
- Shut down SQL Server before backing up the database. In my opinion, this is the least desirable option because it results in the database being unusable for the duration of the backup. The easiest way to handle this is to write a batch file that shuts down the SQL Server services and runs automatically via the Windows Scheduler.
I prefer a combination of the first two methods. I do make use of backup agents, but I believe that using the SQL Server backup method as a second level of protection is a good practice. Simply backing up your data isn’t enough, though. You also need to periodically test your backup to make sure it’s actually working. Nothing is worse than attempting to restore from backup after a failure only to find that the backup has no data.
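As a sketch of the second method, SQL Server 2000’s Transact-SQL backup commands can write a database to a flat file that file-level backup software can then pick up. The database name TPG is borrowed from the example later in this article; the file path is an assumption for illustration:

```sql
-- Back up the TPG database to a flat file (path is assumed).
-- WITH INIT overwrites any existing backup sets in the file.
BACKUP DATABASE TPG
TO DISK = 'D:\MSSQL\BACKUP\TPG.bak'
WITH INIT
GO

-- Periodically confirm that the backup file is readable.
-- RESTORE VERIFYONLY checks the backup set without restoring it.
RESTORE VERIFYONLY
FROM DISK = 'D:\MSSQL\BACKUP\TPG.bak'
```

Note that RESTORE VERIFYONLY only confirms the backup set is complete and readable; the real test is still an occasional full restore to a test server.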
Learn to kill processes
For years, UNIX administrators have had the ability to stop running processes when they are hung or running away. While Windows has a limited version of this ability using Task Manager, SQL Server provides a much more UNIX-like capability.
You may be asking why this ability would be of use in SQL Server. Imagine that a database administrator runs an inefficient query against a massive database, using up CPU resources and slowing access to the database for everyone else—including your customers. Or perhaps your manager has indicated the need for a specific report immediately, but you aren’t able to gather the information because another user has the table locked. In situations like these, you need a way to quickly stop the problem. SQL Server’s KILL command allows you to do just that.
There are two ways to kill a running process. For either method, you need to know the process ID that you wish to terminate. The first method is the most intuitive and involves using SQL Enterprise Manager; the second method is executed using Query Analyzer.
The Enterprise Manager method
To use the Enterprise Manager method, you must actually be running Enterprise Manager. In this utility, choose the Current Activity tab from the Management folder for the server you wish to work with. Figure A shows you where this is located.
|The Current Activity tab helps you to manage your server.|
There are three options under the Current Activity tab, and each serves a specific purpose:
- Process Info—This shows a list of all current SQL Server processes, along with their status, the database each process is affecting, and the user who owns the process.
- Locks/Process ID—If there are any processes with locks on them, this option will show them to you. They will be listed by process.
- Locks/Object—This option also shows locks, but it lists any databases that have locks. The details for each entry show which process has a lock on the database. Use this option when you need to troubleshoot a locking problem on a specific database.
Once you have identified which process you wish to terminate, you can do so by right-clicking the process and choosing Properties from the shortcut menu. You’ll then see the Process Details screen shown in Figure B. One of the buttons on this screen is Kill Process, which does exactly what you would expect—it kills the process you’ve selected.
|Click Kill Process, and the selected process will die.|
The Query Analyzer method
Using Query Analyzer, you first need to get a list of running SQL processes, which you can do by executing the command sp_who in Query Analyzer. When the query completes, you’ll see the screen shown in Figure C. You can terminate one of these processes by typing kill processid in the upper pane and executing the query (press [F5]). In this command, processid represents the value shown for the process in the SPID column.
|The sp_who command returns the list of processes.|
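As a sketch of the Query Analyzer method, suppose sp_who revealed a runaway query on SPID 52 (a hypothetical value; use whatever SPID appears in your own output):

```sql
-- List current server processes; note the SPID of the offender.
sp_who
GO

-- Terminate the process with SPID 52 (hypothetical value taken
-- from the sp_who output above).
KILL 52
```

One caveat: SQL Server will not let you kill your own connection’s SPID, so run the KILL from a separate session if the problem query is your own.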
Learn to use the Database Consistency Checker
The Database Consistency Checker (DBCC) finds errors in, and maintains the databases on, your system. When you receive corruption errors in a table or database, or when a query returns results you know to be incorrect, use DBCC to find and possibly fix the problem.
All DBCC commands are initiated from Query Analyzer. There are many different commands that can be fed to DBCC, but I won’t go over all of them in this article. You can find more information about the DBCC commands in SQL Books Online. However, two of the most important DBCC commands are:
- checkdb—The DBCC checkdb command runs a consistency check on the current database, reporting how many rows each table uses and whether any errors were detected.
- checktable—If you’re experiencing problems with a specific table on your system, use the checktable command to see if you can pinpoint the problem. To use this command, you first need to tell Query Analyzer which database to use. For example, if you want to verify the Categories table in a database called TPG, you would type:
use TPG
go
dbcc checktable ('Categories')
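For the first command, a whole-database pass works the same way: point Query Analyzer at the database (TPG again, from the example above) and run checkdb with no arguments to check the current database:

```sql
-- Point Query Analyzer at the database to check.
USE TPG
GO

-- Check the allocation and structural integrity of every
-- table and index in the current database.
DBCC CHECKDB
```

Running this during a quiet period is a reasonable habit, since the check takes locks and can be slow on large databases.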
Correcting Error 1105 conditions
SQL Server error messages can sometimes be intimidating. But you shouldn’t panic over them. One of the most common errors you’ll see when using SQL Server is:
Can’t allocate space for object syslogs in database dbname because the logsegment is full. If you ran out of space in syslogs, dump the transaction log. Otherwise use ALTER DATABASE or sp_extendsegment to increase the size of the segment.
The above error message may be accompanied by the Error 1105 status code, which essentially indicates that the transaction log is full. While it may not sound like a serious problem, this condition will prevent updates and insertions to the database, and even checkpoint operations.
There are many possible causes for this problem, but the end result is the same—SQL considers the transaction log files full. These logs are supposed to truncate automatically. Some reasons that a log would become full are:
- Large bulk transaction has the transaction log locked—As far as the transaction log is concerned, each single SQL statement is a separate entity. The same goes for bulk transactions, whose log records cannot be truncated until the entire bulk operation completes. If you didn’t reserve enough space in the transaction log and it fills up during this period, the operation will fail and you’ll be unable to perform further operations on the database until the problem is corrected.
- Uncommitted transactions—These can be caused by the problem above (a bulk operation gone awry), by a failed network connection, or by something as simple as poor design in the program.
You can correct the problems related to failed transactions using Query Analyzer and the DUMP TRANSACTION command, which truncates the transaction log and thereby frees up the database for updates. In certain situations this cannot be accomplished, because DUMP TRANSACTION itself requires a log entry to be written. You can get around this by using the DUMP TRANSACTION WITH NO_LOG command, which will not log the dump. Remember to be careful with these commands, because they can result in the loss of data.
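As a sketch, clearing the log of the TPG database from earlier examples might look like this (the backup path is an assumption):

```sql
-- Normal case: back up the transaction log to disk, which
-- truncates the inactive portion of the log.
DUMP TRANSACTION TPG
TO DISK = 'D:\MSSQL\BACKUP\TPG_log.bak'
GO

-- Emergency case: if even the dump itself cannot be logged,
-- discard the log records without backing them up.
-- WARNING: this breaks the log backup chain and can lose data;
-- take a full database backup immediately afterward.
DUMP TRANSACTION TPG WITH NO_LOG
```

DUMP TRANSACTION is the older syntax that SQL Server 2000 still accepts for backward compatibility; the equivalent current form is BACKUP LOG.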