Tuesday, May 07, 2013
A few weeks ago, AtlantaMDF offered scholarships for each of our upcoming Pre-conference sessions at SQL Saturday #220. We would like to congratulate the winners!
Thanks to everyone who applied! And once again we must thank Idera for their generous sponsorship, and Bobby Dimmick (w|t) and Brian Kelley (w|t) of Midlands PASS for the time and effort they put into judging all the applicants.
Don't forget, there's still time to attend the Pre-Cons on May 17, 2013! Click on the EventBrite links for more details and to register!
Monday, April 22, 2013
AtlantaMDF and Idera are teaming up to find a few good people. If you are:
- A student looking to work in the database or business intelligence fields
- A database professional who is between jobs or wants a better one
- A developer looking to step up to something new
- On a limited budget and can’t afford professional SQL Server training
- Able to attend training from 9 to 5 on May 17, 2013
AtlantaMDF is presenting 5 Pre-Conference Sessions (pre-cons) for SQL Saturday #220! And thanks to Idera’s sponsorship, we can offer one free ticket to each of these sessions to eligible candidates! That means one scholarship per Pre-Con!
One Recipient Each will Attend:
If you are interested in attending these pre-cons, send an email by April 30, 2013 to AtlantaMDFPrecons@gmail.com and tell us:
- Why you are a good candidate to receive this scholarship
- Which sessions you’d like to attend, and why (list multiple sessions in order of preference)
- What the session will teach you and how it will help you achieve your goals
The emails will be evaluated by the good folks at Midlands PASS in Columbia, SC. The recipients will be notified by email, and announcements will be made on May 6, 2013.
P.S. - Don't forget that SQLSaturday #220 offers free* training in addition to the pre-cons!
You can find more information about SQL Saturday #220 at http://www.sqlsaturday.com/220/eventhome.aspx.
View the scheduled sessions at http://www.sqlsaturday.com/220/schedule.aspx and register for them at http://www.sqlsaturday.com/220/register.aspx.
* Registration charges a $10 fee to cover lunch expenses.
Monday, April 01, 2013
Like most SQL Server users I'm rather frustrated by Microsoft's insistence on making the really cool features only available in Enterprise Edition. And it really doesn't help that they changed the licensing for SQL 2012 to be core-based, so now it's like 4 times as expensive! It almost makes you want to go with Oracle. That, and a desire to have Larry Ellison do things to your orifices.
And since they've introduced Availability Groups and marked database mirroring as deprecated, you'd think they'd make mirroring available in all editions. Alas…they don't…officially, anyway. Thanks to my constant poking around in places I'm not "supposed" to, I've discovered the low-level code that implements database mirroring, and found that it's available in all editions!
It turns out that the query processor in all SQL Server editions prepends a simple check before every edition-specific DDL statement:
IF CAST(SERVERPROPERTY('Edition') as nvarchar(max)) NOT LIKE '%e%e%e% Edition%'
If that statement returns true, the DDL fails. (The PRINT statements are just placeholders.) Go ahead and test it on Standard, Workgroup, and Express editions, and compare against an Enterprise or Developer edition instance (which support everything).
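In that spirit, here's a minimal sketch of the check described above with the PRINT placeholders filled in, so you can see which way your instance falls:

```sql
-- Sketch of the edition check described above (PRINT statements are placeholders)
IF CAST(SERVERPROPERTY('Edition') AS nvarchar(max)) NOT LIKE '%e%e%e% Edition%'
    PRINT 'Lesser edition: the edition-specific DDL would fail here';
ELSE
    PRINT 'Enterprise-class edition: everything is available';
```

Note that the pattern needs three e's before " Edition", which "Enterprise" and "Developer" have and "Standard" doesn't.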
Once again thanks to Argenis Fernandez (b | t) and his awesome sessions on using Sysinternals, I was able to watch the exact process SQL Server performs when setting up a mirror. Surprisingly, it's not actually implemented in SQL Server! Some of it is, but that's something of a smokescreen; the real meat of it is simple filesystem primitives.
The NTFS filesystem supports links, both hard and symbolic, so you can create two entries for the same file in different directories and/or under different names. You can create them using the MKLINK command in a command prompt:
mklink /D D:\SkyDrive\Data D:\Data
mklink /D D:\SkyDrive\Log D:\Log
This creates symbolic links between my data and log folders and my SkyDrive folder. Any file saved in either location will instantly appear in the other. And since my SkyDrive folder is automatically synchronized with the cloud, any changes I make will be copied instantly (depending on my internet bandwidth, of course).
So what does this have to do with database mirroring? Well, it seems that the mirroring endpoint that you have to create between mirror and principal servers is really nothing more than a SkyDrive link. Although it doesn't actually use SkyDrive, it performs the same function. So in effect, the following statement:
ALTER DATABASE Mir SET PARTNER='TCP://MyOtherServer.domain.com:5022'
is turned into:
mklink /D "D:\Data" "\\MyOtherServer.domain.com\5022$"
The 5022$ "port" is actually a hidden system directory on the principal and mirror servers. I haven't quite figured out how the log files are included in this, or why you have to SET PARTNER on both principal and mirror servers, except maybe that mklink has to do something special when linking across servers. I couldn't get the above statement to work correctly, but found that doing mklink to a local SkyDrive folder gave me similar functionality.
To wrap this up, all you have to do is the following:
- Install SkyDrive on both SQL Servers (principal and mirror) and set the local SkyDrive folder (D:\SkyDrive in these examples)
- On the principal server, run mklink /D on the data and log folders to point to SkyDrive: mklink /D D:\SkyDrive\Data D:\Data
- On the mirror server, run the complementary linking: mklink /D D:\Data D:\SkyDrive\Data
- Create your database and make sure the files map to the principal data and log folders (D:\Data and D:\Log)
- Voilà! Your databases are kept in sync on multiple servers!
One wrinkle you will encounter is that the mirror server will show the data and log files, but you won't be able to attach them to the mirror SQL instance while they are attached to the principal. I think this is a bug in SkyDrive, but as it turns out that's fine: you can't access a mirror while it's hosted on the principal either. So you don't quite get automatic failover, but you can attach the files to the mirror if the principal goes offline. It's also not exactly synchronous, but it's better than nothing, and easier than either replication or log shipping, with a lot less latency.
I will end this with the obvious "not supported by Microsoft" and "Don't do this in production without an updated resume" spiel that you should by now assume with every one of my blog posts, especially considering the date.
Wednesday, March 20, 2013
If you love SQL Server training and are near the Atlanta area, or just love us so much you're willing to travel here, please come join us for:
SQL SATURDAY #220!
The main event is Saturday, May 18. The event is free, with a $10.00 lunch fee. The main page has more details here: http://www.sqlsaturday.com/220/eventhome.aspx
We are also offering pre-conference sessions on Friday, May 17, by 5 world-renowned presenters:
We have an early bird registration price of $119 until noon EST Friday, March 22. After that the price goes to $149, a STEAL when you compare it to the PASS Summit price. :)
Please click on the links to register and for more information. You can also follow the hash tag #SQLSatATL on Twitter for more news about this event.
Can't wait to see you all there!
Monday, September 10, 2012
Despite the efforts of American Airlines, this past weekend I attended the first SQL Saturday in the UK! Hosted by the SQLCambs Chapter of PASS and organized by Mark (b|t) & Lorraine Broadbent, ably assisted by John Martin (b|t), Mark Pryce-Maher (b|t) and other folks whose names I've unfortunately forgotten, it was held at the Crowne Plaza Hotel, which is completely surrounded by Cambridge University.
On Friday, they presented 3 pre-conference sessions given by the brilliant American Cloud & DBA Guru, Buck Woody (b|t), the brilliant Danish SQL Server Internals Guru, Mark Rasmussen (b|t), and the brilliant Scottish Business Intelligence Guru and recent Outstanding PASS Volunteer, Jen Stirrup (b|t). While I would have loved to attend any of their pre-cons (having seen them present several times already), finances and American Airlines ultimately made that impossible. But not to worry, I caught up with them during the regular sessions and at the speaker dinner. And I got back the money they all owed me. (Actually I owed Mark some money)
The schedule was jam-packed: even with only 4 tracks, there were 8 regular slots, a lunch session for sponsor presentations, and a 15-minute keynote given by Buck Woody, who, besides giving an excellent history of SQL Server at Microsoft (and before), also explained the source of the "unknown contact" image that appears in Outlook. Hint: it's not Buck himself.
Amazingly (and against all better judgment ;), they allowed me to present at SQL Saturday 162! I did a 5-minute Lightning Talk on Regular Expressions in SSMS, then a regular 50-minute session on Constraints. You can download the content for the regular session at that link, and for the regular expression presentation here.
I had a great time and had a great audience for both of my sessions. You would never have guessed this was the first event for the organizers, everything went very smoothly, especially for the number of attendees and the relative smallness of the space. The event sponsors also deserve a lot of credit for making themselves fit in a small area and for staying through the entire event until the giveaways at the very end.
Overall this was one of the best SQL Saturdays I've ever attended and I have to congratulate Mark B, Lorraine, John, Mark P-M, and all the volunteers and speakers for making this an astoundingly hard act to follow! Well done!
Tuesday, September 04, 2012
Welcome everyone to T-SQL Tuesday Episode 34! When last we tuned in, Mike Fal (b|t) hosted Trick Shots. These highlighted techniques or tricks that you figured out on your own which helped you understand SQL Server better.
This month, I'm asking you to look back this past week, year, century, or hour...to a time when you COULDN'T figure it out. When you were stuck on a SQL Server problem and you had to seek help.
In the beginning...
SQL Server has changed a lot since I started with it. <Cranky Old Guy> Back in my day, Books Online was neither. There were no blogs. Google was the third-place search site. There were perhaps two or three community forums where you could ask questions. (Besides the Microsoft newsgroups...which you had to access with Usenet. And endure the wrath of...Celko.) Your "training" was reading a book, made from real dead trees, that you bought from your choice of brick-and-mortar bookstore. And except for your local user groups, there were no conferences, seminars, SQL Saturdays, or any online video hookups where you could interact with a person. You'd have to call Microsoft Support...on the phone...a LANDLINE phone. And none of this "SQL Family" business!</Cranky Old Guy>
Even now, with all these excellent resources available, it's still daunting for a beginner to seek help for SQL Server. The product is roughly 1247.4523 times larger than it was 15 years ago, and it's simply impossible to know everything about it.* So whether you are a beginner or a seasoned pro with over a decade's experience, what do you do when you need help on SQL Server?
That's so meta...
In the spirit of offering help, here are some suggestions for your topic:
- Tell us about a person or SQL Server community who has been helpful to you. It can be about a technical problem, or not (e.g. someone who volunteered for your local SQL Saturday). Sing their praises! Let the world know who they are!
- Do you have any tricks for using Books Online? Do you use the locally installed product, or are you completely online with BOL/MSDN/Technet, and why?
- If you've been using SQL Server for over 10 years, how has your help-seeking changed? Are you using Twitter, StackOverflow, MSDN Forums, or another resource that didn't exist when you started? What made you switch?
- Do you spend more time helping others than seeking help? What motivates you to help, and how do you contribute?
- Structure your post along the lyrics to The Beatles song Help! Audio or video renditions are particularly welcome! Lyrics must include reference to SQL Server terminology or community, and performances must be in your voice or include you playing an instrument.
These are just suggestions, you are free to write whatever you like. Bonus points if you can incorporate ALL of these into a single post. (Or you can do multiple posts, we're flexible like that.) Help us help others by showing how others helped you!
Legalese, Your Rights, Yada yada...
If you would like to participate in T-SQL Tuesday please be sure to follow the rules below:
- Your blog post must be published between Tuesday, September 11, 2012 00:00:00 GMT and Wednesday, September 12, 2012 00:00:00 GMT.
- Include the T-SQL Tuesday logo (above) and hyperlink it back to this post.
- If you don’t see your post in trackbacks, add the link to the comments below.
If you are on Twitter please tweet your blog using the #TSQL2sDay hashtag. I can be contacted there as @sql_r, in case you have questions or problems with comments/trackback. I'll have a follow-up post listing all the contributions as soon as I can.
Thank you all for participating, and special thanks to Adam Machanic (b|t) for all his help and for continuing this series!
Tuesday, August 14, 2012
Mike Fal (b | t) is hosting this month's T-SQL Tuesday on Trick Shots. I love this choice because I've been preoccupied with sneaky/tricky/evil SQL Server stuff for a long time and have been presenting on it for the past year. Mike's directives were "Show us a cool trick or process you developed…It doesn’t have to be useful", which most of my blogging definitely fits, and "Tell us what you learned from this trick…tell us how it gave you insight in to how SQL Server works", which is definitely a new concept. I've done a lot of reading and watching on SQL Server Internals and even attended training, but sometimes I need to go explore on my own, using my own tools and techniques. It's an itch I get every few months, and, well, it sure beats workin'.
I've found some people to be intimidated by SQL Server's internals, and I'll admit there are A LOT of internals to keep track of, but there are tons of excellent resources that clearly document most of them, and show how knowing even the basics of internals can dramatically improve your database's performance. It may seem like rocket science, or even brain surgery, but you don't have to be a genius to understand it.
Although being an "evil genius" can help you learn some things they haven't told you about. ;)
This blog post isn't a traditional "deep dive" into internals, it's more of an approach to find out how a program works. It utilizes an extremely handy tool from an even more extremely handy suite of tools, Sysinternals. I'm not the only one who finds Sysinternals useful for SQL Server: Argenis Fernandez (b | t), Microsoft employee and former T-SQL Tuesday host, has an excellent presentation on how to troubleshoot SQL Server using Sysinternals, and I highly recommend it. Argenis didn't cover the Strings.exe utility, but I'll be using it to "hack" the SQL Server executable (DLL and EXE) files.
Please note that I'm not promoting software piracy or applying these techniques to attack SQL Server via internal knowledge. This is strictly educational and doesn't reveal any proprietary Microsoft information. And since Argenis works for Microsoft and demonstrated Sysinternals with SQL Server, I'll just let him take the blame for it. :P (The truth is I've used Strings.exe on SQL Server before I ever met Argenis.)
Once you download and install Strings.exe you can run it from the command line. For our purposes we'll want to run this in the Binn folder of your SQL Server instance (I'm referencing SQL Server 2012 RTM):
cd "C:\Program Files\Microsoft SQL Server\MSSQL11\MSSQL\Binn"
C:\Program Files\Microsoft SQL Server\MSSQL11\MSSQL\Binn> strings *sql*.dll > sqldll.txt
C:\Program Files\Microsoft SQL Server\MSSQL11\MSSQL\Binn> strings *sql*.exe > sqlexe.txt
I've limited myself to DLLs and EXEs that have "sql" in their names. There are quite a few more but I haven't examined them in any detail. (Homework assignment for you!)
If you run this yourself you'll get 2 text files, one with all the extracted strings from every SQL DLL file, and the other with the SQL EXE strings. You can open these in Notepad, but you're better off using Notepad++, EditPad, Emacs, Vim or another more powerful text editor, as these will be several megabytes in size.
And when you do open it…you'll find…a TON of gibberish. (If you think that's bad, just try opening the raw DLL or EXE file in Notepad. And by the way, don't do this in production, or even on a running instance of SQL Server.) Even if you don't clean up the file, you can still use your editor's search function to find a keyword like "SELECT" or some other item you expect to be there. As dumb as this sounds, I sometimes spend my lunch break just scanning the raw text for anything interesting. I'm boring like that.
Sometimes though, having these files available can lead to some incredible learning experiences. For me the most recent time was after reading Joe Sack's post on non-parallel plan reasons. He mentions a new SQL Server 2012 execution plan element called NonParallelPlanReason, and demonstrates a query that generates "MaxDOPSetToOne". Joe (formerly on the Microsoft SQL Server product team, so he knows this stuff) noted that this new element was not yet documented, and tried a few more examples to see what other reasons could be generated.
Since I'd already run Strings.exe on the SQL Server DLLs and EXE files, it was easy to run grep/find/findstr for MaxDOPSetToOne on those extracts. Once I found which files it belonged to (sqlmin.dll) I opened the text to see if the other reasons were listed. As you can see in my comment on Joe's blog, there were about 20 additional non-parallel reasons. And while it's not "documentation" of this underdocumented feature, the names are pretty self-explanatory about what can prevent parallel processing. I especially like the ones about cursors – more ammo! - and am curious about the PDW compilation and Cloud DB replication reasons.
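For reference, a findstr pass over the extracts from earlier looks something like this (a sketch; /I is case-insensitive, /M prints only the names of matching files — grep works just as well):

```
findstr /I /M "MaxDOPSetToOne" sqldll.txt sqlexe.txt
```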
One reason completely stumped me: NoParallelHekatonPlan. What the heck is a hekaton? Google and Wikipedia were vague, and the top results were not in English. I found one reference to Greek, stating "hekaton" can be translated as "hundredfold"; with a little more Wikipedia-ing this leads to hecto, the prefix for "one hundred" as a unit of measure. I'm not sure why Microsoft chose hekaton for such a plan name, but having already learned some Greek I figured I might as well dig some more in the DLL text for hekaton. Here's what I found:
Occurs when a Hekaton procedure call dispatch goes to slow parameter passing code path
The reason why Hekaton parameter passing code took the slow code path
Interesting! The first 4 entries (in red) mention parameters and "slow code". Could this be the foundation of the mythical DBCC RUNFASTER command? Have I been passing my parameters the slow way all this time?
And what about those sp_xxxx_hekaton_database procedures (in blue)? Could THEY be the secret to a faster SQL Server? Could they promise a "hundredfold" improvement in performance? Are these special, super-undocumented DIB (databases in black)?
I decided to look in the SQL Server system views for any objects with hekaton in the name, or references to them, in hopes of discovering some new code that would answer all my questions:
SELECT name FROM sys.all_objects
WHERE name LIKE '%hekaton%'

(0 row(s) affected)

SELECT name FROM sys.all_objects
WHERE object_definition(OBJECT_ID) LIKE '%hekaton%'

(3 row(s) affected)
Hmm. Well that didn't find much. Looks like these procedures are seriously undocumented, unknown, perhaps forbidden knowledge. Maybe a part of some unspeakable evil?
(No, I'm not paranoid, I just like mysteries and thought that punching this up with that kind of thing might keep you reading. I know I'd fall asleep without it.)
OK, so let's check out those 3 procedures and see what they reveal when I search for "Hekaton":
-- filter out local temp tables, Hekaton tables, and tables for which current user has no permissions
-- Note that OBJECTPROPERTY returns NULL on type="IT" tables, thus we only call it on type='U' tables
OK, that's interesting, let's go looking down a little further:
((@table_type<>'U') or (0 = OBJECTPROPERTY(@table_id, 'TableIsInMemory'))) and -- Hekaton table
Wellllll, that tells us a few new things:
- There's such a thing as Hekaton tables (UPDATE: I'm not the only one to have found them!)
- They are not standard user tables and probably not in memory. (UPDATE: I misinterpreted this because I didn't read all the code when I wrote this blog post.)
- The OBJECTPROPERTY function has an undocumented TableIsInMemory option
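If you want to poke at that undocumented option yourself, here's a quick sketch (the table name is just a hypothetical placeholder; expect 0 for an ordinary user table, and remember undocumented options can return NULL or change between versions):

```sql
-- Probe the undocumented OBJECTPROPERTY option against a regular user table
SELECT OBJECTPROPERTY(OBJECT_ID('dbo.YourTable'), 'TableIsInMemory') AS TableIsInMemory;
```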
Let's check out sp_recompile:
-- (3) Must not be a Hekaton procedure.
And once again go a little further:
if (ObjectProperty(@objid, 'IsExecuted') <> 0 AND
ObjectProperty(@objid, 'IsInlineFunction') = 0 AND
ObjectProperty(@objid, 'IsView') = 0 AND
-- Hekaton procedure cannot be recompiled
-- Make them go through schema version bumping branch, which will fail
ObjectProperty(@objid, 'ExecIsCompiledProc') = 0)
And now we learn that Hekaton procedures also exist, they can't be recompiled, there's a "schema version bumping branch" somewhere, and OBJECTPROPERTY has another undocumented option, ExecIsCompiledProc. (If you experiment with this you'll find the option returns NULL; I think it only works when called from a system object.)
This is neat! Sadly, sp_updatestats doesn't reveal anything new; the comments about Hekaton are the same as in sp_createstats. But we've ALSO discovered undocumented features for the OBJECTPROPERTY function, which we can now search for:
SELECT name, object_definition(OBJECT_ID) FROM sys.all_objects
WHERE object_definition(OBJECT_ID) LIKE '%OBJECTPROPERTY(%'
I'll leave that to you as more homework. I should add that searching the system procedures was recommended long ago by the late, great Ken Henderson, in his Guru's Guide books, as a great way to find undocumented features. That seems to be really good advice!
Now if you're a programmer/hacker, you've probably been drooling over the last 5 entries for hekaton (in green), because these are the names of source code files for SQL Server! Does this mean we can access the source code for SQL Server? As The Oracle suggested to Neo, can we return to The Source???
Well, maybe a little bit. While you won't get the actual source code from the compiled DLL and EXE files, you'll get references to source files, debugging symbols, variables and module names, error messages, and even the startup flags for SQL Server. And if you search for "DBCC" or "CHECKDB" you'll find a really nice section listing all the DBCC commands, including the undocumented ones. Granted those are pretty easy to find online, but you may be surprised what those web sites DIDN'T tell you! (And neither will I, go look for yourself!) And as we saw earlier, you'll also find execution plan elements, query processing rules, and who knows what else. It's also instructive to see how Microsoft organizes their source directories, how various components (storage engine, query processor, Full Text, AlwaysOn/HADR) are split into smaller modules. There are over 2000 source file references, go do some exploring!
So what did we learn? We can pull strings out of executable files, search them for known items, browse them for unknown items, and use the results to examine internal code to learn even more things about SQL Server. We've even learned how to use command-line utilities! We are now 1337 h4X0rz! (Not really. I hate that leetspeak crap.)
Although, I must confess I might've gone too far with the "conspiracy" part of this post. I apologize for that, it's just my overactive imagination. There's really no hidden agenda or conspiracy regarding SQL Server internals. It's not The Matrix. It's not like you'd find anything like that in there:
Attach Matrix Database
Alright, enough of this paranoid ranting! Microsoft are not really evil! It's not like they're The Borg from Star Trek:
ALTER FEDERATION DROP
ALTER FEDERATION SPLIT
Monday, July 16, 2012
I have a few speaking engagements coming up in July. I will be getting my Revenge on twice this week, first at the Steel City SQL User Group in Birmingham, Alabama on July 17, 2012:
New Horizon Computer Learning Center
601 Beacon Pkwy. West, Suite 106
Birmingham, AL 35209
6-8 pm CST
Not content with that, with my hands behind my back, I will pull the same thing from my hat at SQL Saturday 122 in Louisville, KY on July 21, 2012: Schedule Register
These include Revenge: The SQL Parts 1 AND 2! New and improved with the new Office 2013 Preview! (Ummm, not really).
I will then Tame Unruly Data at SQL Saturday 126 in Indianapolis, IN on July 28, 2012: Schedule Register
If you will be in any of those places at those times, and I owe you money, that would be the best time for you to collect. Just make sure not to warn me first, otherwise I may not show.
Tuesday, June 12, 2012
This month's T-SQL Tuesday is being hosted by Aaron Nelson [b | t], fellow Atlantan (the city in Georgia, not the famous sunken city, or the resort in the Bahamas) and covers the topic of logging (the recording of information, not the harvesting of trees) and maintains the fine T-SQL Tuesday tradition begun by Adam Machanic [b | t] (the SQL Server guru, not the guy who fixes cars, check the spelling again, there will be a quiz later).
This is a trick I learned from Fernando Guerrero [b | t] waaaaaay back during the PASS Summit 2004 in sunny, hurricane-infested Orlando, during his session on Secret SQL Server (not sure if that's the correct title, and I haven't used parentheses in this paragraph yet). CONTEXT_INFO is a neat little feature that's existed since SQL Server 2000 and perhaps even earlier. It lets you assign data to the current session/connection, and maintains that data until you disconnect or change it. In addition to the CONTEXT_INFO() function, you can also query the context_info column in sys.dm_exec_sessions, or even sysprocesses if you're still running SQL Server 2000, if you need to see it for another session.
While you're limited to 128 bytes, one big advantage that CONTEXT_INFO has is that it's independent of any transactions. If you've ever logged to a table in a transaction and then lost messages when it rolled back, you can understand how aggravating it can be. CONTEXT_INFO also survives across multiple SQL batches (GO separators) in the same connection, so for those of you who were going to suggest "just log to a table variable, they don't get rolled back": HA-HA, I GOT YOU! Since GO starts a new batch all variable declarations are lost.
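A minimal sketch of that batch-crossing behavior, run as one script in SSMS (the variable name is just an example; it dies at the GO, but the session value doesn't):

```sql
DECLARE @ci varbinary(128) = CAST('still here' AS varbinary(128));
SET CONTEXT_INFO @ci;
GO
-- New batch: @ci no longer exists, but the session's context info survives
SELECT CAST(CONTEXT_INFO() AS varchar(128)) AS msg;
```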
Here's a simple example I recently used at work. I had to test database mirroring configurations for disaster recovery scenarios and measure the network throughput. I also needed to log how long it took for the script to run and include the mirror settings for the database in question. I decided to use AdventureWorks as my database model, and Adam Machanic's Big Adventure script to provide a fairly large workload that's repeatable and easily scalable. My test would consist of several copies of AdventureWorks running the Big Adventure script while I mirrored the databases (or not).
Since Adam's script contains several batches, I decided CONTEXT_INFO would have to be used. As it turns out, I only needed to grab the start time at the beginning, I could get the rest of the data at the end of the process. The code is pretty small:
declare @time binary(128)=cast(getdate() as binary(8))
set context_info @time
... rest of Big Adventure code ...
select @@servername, mirroring_role_desc, mirroring_partner_instance,
db_name(database_id), mirroring_state_desc, mirroring_safety_level_desc,
cast(cast(context_info() as binary(8)) as datetime),
datediff(s,cast(cast(context_info() as binary(8)) as datetime),getdate())
from sys.database_mirroring where db_name(database_id) like 'Adv%';
I declared @time as binary(128) since CONTEXT_INFO is defined that way. I couldn't convert GETDATE() directly to binary(128), as that pads the first 120 bytes with 0x00. To keep the CAST functions simple and avoid using SUBSTRING, I decided to CAST GETDATE() as binary(8) and let SQL Server handle the implicit conversion to binary(128). It's not the safest way perhaps, but it works on my machine. :)
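If you're curious, a quick sketch shows the padding difference between the two casts (view the results in hex):

```sql
-- 8 bytes: exactly the datetime's internal representation
SELECT CAST(GETDATE() AS binary(8)) AS tight;
-- 128 bytes: 120 leading 0x00 bytes, with the date bytes at the tail
SELECT CAST(GETDATE() AS binary(128)) AS padded;
```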
As I mentioned earlier, you can query system views for sessions and get their CONTEXT_INFO. With a little boilerplate code this can be used to monitor long-running procedures, in case you need to kill a process, or are just curious how long certain parts take. In this example, I added code to Adam's Big Adventure script to set CONTEXT_INFO messages at strategic places I want to monitor. (His code is in UPPERCASE as it was in the original, mine is all lowercase):
declare @msg binary(128)
set @msg=cast('Altering bigProduct.ProductID' as binary(128))
set context_info @msg
ALTER TABLE bigProduct ALTER COLUMN ProductID INT NOT NULL
set context_info 0x0
declare @msg1 binary(128)
set @msg1=cast('Adding pk_bigProduct Constraint' as binary(128))
set context_info @msg1
ALTER TABLE bigProduct ADD CONSTRAINT pk_bigProduct PRIMARY KEY (ProductID)
set context_info 0x0
declare @msg2 binary(128)
set @msg2=cast('Altering bigTransactionHistory.TransactionID' as binary(128))
set context_info @msg2
ALTER TABLE bigTransactionHistory ALTER COLUMN TransactionID INT NOT NULL
set context_info 0x0
declare @msg3 binary(128)
set @msg3=cast('Adding pk_bigTransactionHistory Constraint' as binary(128))
set context_info @msg3
ALTER TABLE bigTransactionHistory ADD CONSTRAINT pk_bigTransactionHistory PRIMARY KEY NONCLUSTERED(TransactionID)
set context_info 0x0
declare @msg4 binary(128)
set @msg4=cast('Creating IX_ProductId_TransactionDate Index' as binary(128))
set context_info @msg4
CREATE NONCLUSTERED INDEX IX_ProductId_TransactionDate
ON bigTransactionHistory(ProductId,TransactionDate) INCLUDE(Quantity,ActualCost)
set context_info 0x0
This doesn't include the entire script, only those portions that altered a table or created an index.
One annoyance is that SET CONTEXT_INFO requires a literal or variable, you can't use an expression. And since GO starts a new batch I need to declare a variable in each one. And of course I have to use CAST because it won't implicitly convert varchar to binary. And even though context_info is a nullable column, you can't SET CONTEXT_INFO NULL, so I have to use SET CONTEXT_INFO 0x0 to clear the message after the statement completes. And if you're thinking of turning this into a UDF, you can't, although a stored procedure would work.
So what does all this aggravation get you? As the code runs, if I want to see which stage the session is at, I can run the following (assuming SPID 51 is the one I want):
select CAST(context_info as varchar(128)) from sys.dm_exec_sessions where session_id=51
Since SQL Server 2005 introduced the new system and dynamic management views (DMVs), there's not as much need for tagging a session with these kinds of messages. You can get the session start time and currently executing statement from them, neatly presented if you use Adam's sp_whoisactive utility (and you absolutely should be using it).
Of course you can always use xp_cmdshell, a CLR function, or some other tricks to log information outside of a SQL transaction. All the same, I've used this trick to monitor long-running reports at a previous job, and I still think CONTEXT_INFO is a great feature, especially if you're still using SQL Server 2000 or want to supplement your instrumentation. If you'd like an exercise, consider adding the system time to the messages in the last example, and an automated job to query and parse it from the system tables. That would let you track how long each statement ran without having to run Profiler.
Saturday, March 31, 2012
We've just finished up a fantastic event at SQLBits X in London! If you've never been to SQLBits and you can make it to the UK, I highly recommend it. If you didn't attend, here's what you missed.
Meanwhile, for those who attended the Lightning Talk sessions and were disappointed that I ran out of time, here's the last part that you would have seen:
/* How to Lose Friends and Irritate People...With Unicode!
   SQLBits X - London - March 31, 2012 */
-- some sexy SQL
DECLARE @oohbaby TABLE(i INT NOT NULL UNIQUE,
uni_char AS NCHAR(i),
hex AS CAST(i AS BINARY(2)))
INSERT @oohbaby VALUES(664),(1022),(1023),(1120),(1150),(8857),(11609),(42420),(42427)
-- change results font to larger size, some only work in grid font
SELECT * FROM @oohbaby
SELECT NCHAR(1022) + NCHAR(1023) AS Page3Girl
It's probably better that you run this yourself, in the privacy of your own home/office, you know *wink* *wink* *nudge* *nudge* *say no more*