At tonight's meeting of the Pacific NorthWest SQL Server User's Group (a.k.a. PNWSQL) our speaker was Craig Freedman from the Query Execution Team at Microsoft. Also, as a special treat, Scott Stauffer (a.k.a. The Social Director) came down from Canada for the meeting. Scott recently started a PASS chapter in Vancouver, BC and decided to come visit and see what life was like “on campus”. Craig gave a talk at Scott's chapter just a month or two ago, but Scott said he still expected to get something new from tonight.
One new item: the word on the street is that SP4 for SQL Server 2000 is “just around the corner”. And so, reportedly, is SP2 for SQL Server Reporting Services.
But now, on to Query Execution. I was particularly interested in this talk because reading execution plans has been one of my weaker suits. Sure, I understood that an Index Seek was better than a Table Scan, but I couldn't have told you what a Lazy Spool was used for, or whether it was good or bad (turns out that “it depends”). For more details, refer to Books Online on the Execution Plan pane. Here are a few notes to whet your appetite:
- A common mistake is to assume that the estimated "cost" figures such as I/O Cost or CPU Cost somehow equate to real-world values like milliseconds. They don't. They are internal optimizer numbers, and about all you can do is compare higher and lower values against each other. They do NOT correlate to actual time intervals.
- There is both a graphical version and a text version of the execution plan. They show almost identical data, just in different formats. The graphical version makes it easier to get a big-picture view, while the text version is easier to search, and you can save it to disk to run diff comparisons or other processing.
- One difference between the graphical and text versions of the execution plan: the graphical version will highlight when the system is missing statistics on a table.
- There are approximately 40 different iterators (the basic units such as Index Seek, Lazy Spool, etc.). They can be combined in many different ways to achieve the desired results. But only a limited set of the iterators supports dynamic cursors. If you are using a dynamic cursor, you are limiting the possible efficiency because so many iterators are unavailable. In fact, in some cases a plan for a dynamic cursor simply cannot be produced, and a keyset cursor is substituted instead. (Note that at SQLTeam we strongly discourage the use of any cursors. Nigel has gone so far as to state that cursors are never necessary.)
- Nested Loop Join: This is the basic algorithm for joining. Essentially it gets a row from the left input, then gets all the associated rows from the right input, then moves on to the next row of the left input. The number of executes on the right input is equal to the number of rows in the left input. Note that the left and right inputs are not necessarily the tables in the order you wrote them in your SQL JOIN clause; the optimizer may flip left-for-right if it determines that is more efficient. Nested Loop Joins generally perform best for small left inputs. This is the only join iterator that handles inequality predicates (e.g. FROM t1 JOIN t2 ON t1.col1 > t2.col4).
- Merge Join: There must be at least one equijoin for this to work. Data must be sorted on join keys either by an existing index or by a Sort iterator. Performs well for larger input sets, especially if data is already sorted by an index.
- Hash Join: There must be at least one equijoin. This iterator gets all rows from the left input and puts them in an in-memory hash table, then processes rows from the right input, matching as it goes. It does not require any sort order. Because a Hash Join puts the data from the left input into memory, it can consume a lot of memory, and if it runs out, it will spill to disk, slowing things down. This is also known as a Stop-and-Go iterator because the data flow stops at this iterator while it is loading the hash table in memory. By contrast, the Nested Loop Join allows the data to keep flowing continuously.
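If you want to see these join iterators for yourself, here is a minimal sketch. It assumes the Northwind sample database (the Customers/Orders tables are illustrative; any two tables with a common key will do) and forces each algorithm with a join hint purely for experimentation, so you can compare the plans:

```sql
-- Show the textual plan instead of running the queries.
SET SHOWPLAN_TEXT ON;
GO

-- Same logical query three times, forcing a different join iterator each time.
SELECT c.CompanyName, o.OrderID
FROM Customers c JOIN Orders o ON c.CustomerID = o.CustomerID
OPTION (LOOP JOIN);    -- Nested Loops: right input re-executed per left row

SELECT c.CompanyName, o.OrderID
FROM Customers c JOIN Orders o ON c.CustomerID = o.CustomerID
OPTION (MERGE JOIN);   -- Merge Join: may add a Sort iterator if inputs aren't sorted

SELECT c.CompanyName, o.OrderID
FROM Customers c JOIN Orders o ON c.CustomerID = o.CustomerID
OPTION (HASH JOIN);    -- Hash Match: builds a hash table from one input
GO

SET SHOWPLAN_TEXT OFF;
GO
```

Compare the three plans and notice where a Sort appears for the Merge Join when no index provides the order. (Again: hints here are only a learning tool, not something to ship.)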
There was a lot more covered, but that's a good start for you. One of Craig's closing comments was that you should use optimizer hints only as a last resort. They override the optimizer doing its job, and if your data changes shape down the road, you may force a less-than-optimal method to be used. Craig strongly suggests looking for other ways to induce the optimizer to perform better, such as building appropriate indexes and keeping up-to-date statistics on your tables.
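In the spirit of that advice, the preferred alternatives to hints look something like this (object and index names here are illustrative, not from the talk):

```sql
-- Give the optimizer a useful index on the join/filter column...
CREATE INDEX IX_Orders_CustomerID ON Orders (CustomerID);

-- ...and make sure its statistics are current.
UPDATE STATISTICS Orders;

-- Or refresh statistics database-wide:
EXEC sp_updatestats;
```

With a supporting index and fresh statistics, the optimizer can usually pick the right join iterator on its own, and will keep adapting as the data changes.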
Thanks, Craig, for helping each of us learn a little bit more. By the way, nice effects on the slides where you explained the join iterators. Those were great, clear explanations!
posted @ Wednesday, November 10, 2004 10:54 PM