Optimize SQL Server queries with these advanced tuning techniques

The best way to tune performance is to try to write your queries in a number of different ways and compare their reads and execution plans. Here are various techniques that you can use to try to optimize your database queries.

Now that you know how to speed up your SQL Server database queries, you can start delving into some of the more advanced tuning options.

Use JOINs rather than subqueries

If possible (and if it makes sense), I suggest using JOIN statements rather than subqueries to improve performance. When a subquery is used as criteria in a SELECT statement, the values returned from the subquery are distinct. Returning a distinct list of values requires additional processing, which can slow down your queries.
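As a sketch of the rewrite, consider two hypothetical tables, Customers and Orders, joined on CustomerID (the table and column names here are illustrative, not from a real schema):

-- Subquery version: the IN list is de-duplicated before the outer query runs
SELECT c.CustomerID, c.CustomerName
FROM Customers c
WHERE c.CustomerID IN (SELECT o.CustomerID FROM Orders o)

-- JOIN version: same result; DISTINCT is needed here only because a
-- customer may have many orders
SELECT DISTINCT c.CustomerID, c.CustomerName
FROM Customers c
INNER JOIN Orders o ON o.CustomerID = c.CustomerID

As always, compare the reads and execution plans of both forms; the optimizer may already produce the same plan for simple cases.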

Use explicit transactions

When data manipulation occurs in the database, the actions are written to the transaction log. If a script executes many DML statements, it might be a good idea to place them inside an explicit transaction for performance purposes. In autocommit mode, each statement commits on its own, forcing a write to the transaction log for every statement; wrapping the statements in one transaction defers that work until the transaction commits. For example, the following statement takes approximately seven seconds to execute on my laptop:

CREATE TABLE InsertTable
(
      IDCol INT IDENTITY(1,1),
      ColVal INT
)
GO
DECLARE @Counter INT
SET @Counter = 1
WHILE @Counter < 15000
BEGIN
      INSERT INTO InsertTable(ColVal)
      SELECT DATEPART(ms, GETDATE())
      SET @Counter = @Counter + 1
END

If I wrap the INSERT statements inside of a transaction, the loop takes a little less than two seconds to execute. This is because the individual statements are no longer committed one at a time; their log records are not flushed until the transaction commits, which reduces the number of writes to the log.

DECLARE @Counter INT
SET @Counter = 1
BEGIN TRAN
WHILE @Counter < 15000
BEGIN
      INSERT INTO InsertTable(ColVal)
      SELECT DATEPART(ms, GETDATE())
      SET @Counter = @Counter + 1
END
COMMIT TRAN

Note: I advise you to use this approach with care. If there are too many statements inside a transaction, it will increase the duration of the transaction, which increases the amount of time locks are held on the objects inside of the transaction.

Use UNION ALL instead of UNION

When you use the UNION clause to concatenate the results from two or more SELECT statements, duplicate records are removed. This duplicate removal requires additional computing to accomplish. If you are not concerned that your results may include duplicate records, use the UNION ALL clause, which concatenates the full results from the SELECT statements.
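The difference is easy to see with two hypothetical tables, CurrentSales and ArchivedSales (illustrative names, not from a real schema):

-- UNION removes duplicate rows, which requires an extra sort or hash step
SELECT ProductID FROM CurrentSales
UNION
SELECT ProductID FROM ArchivedSales

-- UNION ALL simply appends the second result set to the first
SELECT ProductID FROM CurrentSales
UNION ALL
SELECT ProductID FROM ArchivedSales

If the two sources cannot contain overlapping rows, or duplicates are acceptable, UNION ALL avoids the de-duplication work entirely.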

Use EXISTS when possible

When you need to check for the presence of matching rows, it is usually faster to use the EXISTS operator than COUNT(*). This is because COUNT(*) must count every row returned by the statement, while EXISTS returns true as soon as it finds a single row that meets the criteria.
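For example, using the SalesHistory table from the STATISTICS IO example below (the Product column and its value are assumptions for illustration):

-- COUNT(*) forces a count of every matching row
IF (SELECT COUNT(*) FROM SalesHistory WHERE Product = 'Computer') > 0
      PRINT 'Found'

-- EXISTS can stop at the first matching row
IF EXISTS (SELECT * FROM SalesHistory WHERE Product = 'Computer')
      PRINT 'Found'

Both tests produce the same answer, but the EXISTS form gives the optimizer permission to stop scanning early.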

STATISTICS IO

There are different ways to determine the best way to write your queries. Two of my favorite methods are looking at the number of logical reads produced by the query and looking at graphical execution plans provided by SQL Server Management Studio. For determining the number of logical reads, you can turn the STATISTICS IO option ON. Consider this query:

SET STATISTICS IO ON
SELECT * FROM SalesHistory

The following is returned in the Messages window in SQL Server Management Studio:

Table 'SalesHistory'. Scan count 1, logical reads 33, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

STATISTICS IO returns several pieces of data, but I am really only concerned with the logical reads figure, which tells me the number of pages read from the data cache. It is the most useful measure because it stays constant when I run the same query, whereas external factors, such as locking by other queries, can vary the execution time from run to run.

When I'm tuning my queries, my goal is to get the number of logical reads as low as possible. Fewer logical reads typically leads to faster execution times.

Fine tuning

In a future article, I will look at how you can use the graphical execution plans created by SQL Server to fine tune your queries.

Tim Chapman is a SQL Server database administrator and consultant who works for a bank in Louisville, KY. Tim has more than eight years of IT experience, and he is a Microsoft certified Database Developer and Administrator. If you would like to contact Tim, please e-mail him at chapman.tim@gmail.com.
