Performance and Concurrency

Performance and concurrency go hand-in-hand.  In fact, the two are more closely tied than many DBAs and developers realize.  In this last article of my concurrency series, I’ll underscore the important relationship between performance and concurrency and explain why performance tuning can fix blocking problems in addition to improving response times.

Why Performance Affects Concurrency

A bad query execution plan touches much more data than necessary and consumes inordinate CPU and disk resources.  The result is that more locks are acquired than needed, and these are held for longer durations as concurrent queries compete for conflicting locks, CPU and disk resources.  This can lead to a downward spiral of hardware bottlenecks, long-duration locks, blocking and deadlocks.  I’ve even seen some companies throw hardware at such a problem rather than address the underlying cause.  Although faster hardware is one way to reduce lock duration and deadlock likelihood and improve overall performance, it is most certainly not the best approach.

Keep in mind that unnecessarily high resource utilization manifests itself as a performance problem only in severe cases.  An occasional query that scans thousands of rows when only a few are needed might barely be noticeable in Performance Monitor.  However, the number of rows touched greatly increases the likelihood that the query will block (or be blocked by) other concurrent queries.  It is often only after the problem escalates into a severe blocking incident that a DBA becomes aware that the latent problem exists.

Tuning addresses blocking and deadlocking problems for a number of reasons.  Most importantly, index tuning provides an efficient path to data so that minimal data needs to be locked while minimal CPU and disk resources are used.  Query tuning ensures that queries are formulated to achieve the desired result efficiently and that expressions are sargable.  Best practices in server configuration and maintenance, such as placing data and log files on separate drives and keeping statistics up-to-date, help transactions and queries run as quickly as possible.  Well-tuned databases consume minimal hardware resources, so more queries can be processed concurrently and short-term blocking isn't even noticeable.
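To illustrate sargability, compare a predicate that wraps an indexed column in a function with one that leaves the column bare.  The table and column names below are hypothetical, but only the second form allows an index seek on OrderDate; the first forces a scan, touching (and locking) far more data:

-- non-sargable: applying a function to the column prevents an index seek
SELECT OrderID FROM dbo.Orders
WHERE YEAR(OrderDate) = 2008;

-- sargable: a bare column compared against a range allows an index seek
SELECT OrderID FROM dbo.Orders
WHERE OrderDate >= '20080101' AND OrderDate < '20090101';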

I strongly advocate examining query execution plans (ctrl-L in SSMS) before promoting stored procedures and queries to production.  Check to ensure that scans are done only when appropriate for the task at hand and indexes are used efficiently (seeks).  Even if query response time is adequate, a suboptimal plan can lead to avoidable concurrency problems.  Proactively examining execution plans is a great way to maximize both performance and concurrency.
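As a quick example, you can also view the estimated plan as text without executing the query at all (again using the hypothetical dbo.Orders table):

SET SHOWPLAN_TEXT ON;
GO
SELECT OrderID FROM dbo.Orders
WHERE OrderDate >= '20080101' AND OrderDate < '20090101';
GO
SET SHOWPLAN_TEXT OFF;
GO

When a query should touch only a few rows, look for Index Seek operators in the output rather than Clustered Index Scan.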

posted @ Saturday, June 28, 2008 9:15 AM


Comments on this entry:

# re: Performance and Concurrency

Left by JackV at 6/30/2008 2:57 AM
Is there a way of generically capturing SQL statements that are causing deadlocks in SQL Server? I'm talking about at a SQL Server level, as opposed to making code changes in SQL Server.

# re: Performance and Concurrency

Left by Dan Guzman at 6/30/2008 9:23 PM
Hi, Jack.

There are a couple of ways you can capture the SQL statements involved in a deadlock along with resource information. One method is to turn on trace flags 1204 (deadlock info) and 3605 (print to error log):

DBCC TRACEON(1204, 3605, -1)

The error log will then include diagnostic information following a deadlock condition.
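For example, you can then check the current error log for the deadlock details right from a query window (xp_readerrorlog is undocumented but commonly used):

EXEC master.dbo.xp_readerrorlog;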

Another method is to run a trace of the Locks:Deadlock Graph event. This will generate deadlock-list XML that you can view graphically in Profiler.