The last thing you want to do is introduce instability when gathering useful performance and troubleshooting information via Profiler. However, I’ve found that many DBAs are not aware that Profiler and SQL Trace need to be used carefully in order to minimize overhead that can negatively affect overall SQL Server performance and lead to problems like query timeouts. I want to get the word out on how to use these invaluable tools with minimal server impact.
First, here’s a little background on SQL Trace architecture. SQL Trace runs in the SQL Server process to gather and filter traced events. A trace can either write directly to a file (a server-side trace) or return data directly to an application like Profiler in near real-time. In both cases, traces are ultimately created and managed using documented stored procedures (sp_trace_create, sp_trace_setevent, sp_trace_setfilter and sp_trace_setstatus). However, instead of calling these stored procedures directly, SQL 2005 Profiler uses the SMO API, which wraps these stored procedure calls as well as calls to an undocumented procedure to retrieve trace data.
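As a concrete illustration of those documented procedures, here is a minimal server-side trace sketch. The file path, max file size and the choice of event/column IDs are placeholder assumptions for illustration; the IDs themselves (event 12 = SQL:BatchCompleted, columns 1/12/13/14) come from Books Online.

```sql
-- Minimal server-side trace sketch; adjust path and size for your server.
DECLARE @TraceID int, @maxfilesize bigint, @on bit;
SET @maxfilesize = 50;  -- max trace file size in MB (assumed value)
SET @on = 1;

-- Create the trace with file rollover (option 2).
-- The path omits the .trc extension; SQL Server appends it.
EXEC sp_trace_create @TraceID OUTPUT, 2, N'C:\Traces\MyTrace', @maxfilesize, NULL;

-- Capture SQL:BatchCompleted (event 12) with a few columns:
-- 1 = TextData, 12 = SPID, 13 = Duration, 14 = StartTime
EXEC sp_trace_setevent @TraceID, 12, 1, @on;
EXEC sp_trace_setevent @TraceID, 12, 12, @on;
EXEC sp_trace_setevent @TraceID, 12, 13, @on;
EXEC sp_trace_setevent @TraceID, 12, 14, @on;

-- Start the trace (status 1)
EXEC sp_trace_setstatus @TraceID, 1;
SELECT @TraceID AS TraceID;  -- note this for later stop/delete
```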
When an event occurs inside of SQL Server, SQL Trace passes the event to all running traces that include the event. Each trace then filters the event as specified by the trace definition and queues the selected event data, which is then either written directly to a file or returned to the Profiler instance (or other SMO-based application).
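Filtering happens per trace via sp_trace_setfilter. A sketch below restricts a trace to long-running events; the TraceID and threshold are assumed values, and note that in SQL 2005 the Duration column is reported in microseconds.

```sql
-- Keep only events with Duration >= 100 ms.
-- Column 13 = Duration; logical operator 0 = AND;
-- comparison operator 4 = greater than or equal.
DECLARE @TraceID int, @minduration bigint;
SET @TraceID = 2;            -- assumed TraceID returned by sp_trace_create
SET @minduration = 100000;   -- 100 ms expressed in microseconds
EXEC sp_trace_setfilter @TraceID, 13, 0, 4, @minduration;
```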
Writing trace data directly to a file is very efficient since it's done directly by SQL Server with minimal overhead. The active trace file can be viewed using the fn_trace_gettable() table-valued function on the same instance as the trace. However, the trace needs to be stopped or a file rollover must occur before the trace data can be imported into Profiler or accessed from another SQL Server instance using fn_trace_gettable().
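Once the trace file is accessible, fn_trace_gettable() makes it queryable like any table. The file path and target table name below are placeholders.

```sql
-- Load a trace file into a table for querying (path is a placeholder).
-- Passing DEFAULT as the second argument reads all rollover files
-- in the sequence starting with the specified file.
SELECT TextData, Duration, StartTime, SPID
INTO dbo.MyTraceData
FROM fn_trace_gettable(N'C:\Traces\MyTrace.trc', DEFAULT);
```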
Profiler, on the other hand, retrieves and processes trace data in near real time. The drawback is that the interactive graphical display requires quite a bit more overhead than writing directly to a file and more resources are also needed to transfer the trace data from SQL Server to the application. This overhead isn’t typically noticeable unless the trace generates a lot of events or the server is low on resources. However, I’ve seen cases where a high-volume Profiler trace degrades overall server performance to the point of causing query timeouts and slow response time. SQL Server MVP Linchi Shea’s post Performance Impact: Profiler Tracing vs. Server Side SQL Tracing provides metrics that validate my experiences.
I routinely use server-side SQL Trace scripts instead of Profiler on production servers to minimize trace resource usage. Since it’s quite tedious to create a trace script from scratch, I use Profiler against a development server to create the desired trace and then script the trace from the Profiler menu (File > Export > Script Trace Definition). I then change the script to specify the trace file path, max file size and rollover option. Finally, I run the script on the production server to start the trace and make note of the TraceID, which will be needed to stop and delete the trace with sp_trace_setstatus. The fn_trace_getinfo(DEFAULT) function comes in handy if I forget the TraceID or want to list currently defined traces.
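Putting that together, listing traces and then stopping and deleting one looks like the sketch below; the TraceID is an assumed value you would have noted when the trace was created.

```sql
-- List currently defined traces.
-- In the result set, property 5 is the trace status (0 = stopped, 1 = running).
SELECT * FROM fn_trace_getinfo(DEFAULT);

-- Stop (status 0) and then delete (status 2) a trace by its TraceID.
DECLARE @TraceID int;
SET @TraceID = 2;  -- assumed TraceID noted when the trace was started
EXEC sp_trace_setstatus @TraceID, 0;
EXEC sp_trace_setstatus @TraceID, 2;
```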
The Optimizing SQL Trace topic in Books Online lists some SQL Trace performance guidelines. Here are some of those plus some of my own:
- Run Profiler remotely instead of directly on server
- Avoid including events that occur frequently (e.g. Lock:Acquired) unless absolutely needed
- Include only event classes needed
- Specify limiting filters to reduce the number of events
- Avoid redundant data (e.g. SQL:BatchStarting and SQL:BatchCompleted)
- Avoid running large traces with Profiler; consider a server-side SQL Trace instead
- Limit server-side trace file size and manage space usage
I hope these tips help you avoid introducing performance problems when using Profiler and SQL Trace.