One of my clients has applications that use MSDTC. We’re spinning up a SQL Server 2016 Availability Group to take advantage of the MSDTC support it introduced. Most of these distributed transactions take place across linked servers. And of course the new servers are in a new domain with a very limited trust with the old domain.
I found lots of articles and checklists for configuring MSDTC but none had everything I needed. This is the checklist I came up with to configure cross-domain MSDTC with a limited trust in place. (I don’t know what kind of trust it is. I just know I have to put accounts into Domain Local groups in order to use them across the trust. And it’s one way.)
I found LOTS of articles on configuring MSDTC. These are the ones I found to be the most comprehensive.
- Troubleshooting MSDTC Communication Checklist – This is one of the better checklists I found. Has a good list of required ports.
- Troubleshooting Problems with MSDTC – Another good resource
- DTCPing – A VERY useful utility to confirm basic connectivity.
- DTCTester – Utility to perform a distributed transaction.
These are the steps that future Bill will need to get this configured next time.
Fix Name Resolution
We were crossing a domain boundary and weren’t getting good NetBIOS name resolution. I had to put a hosts file entry on both servers pointing to each other with just the machine name. I eventually had to add a second entry for the Availability Group Listener. A ping -a needs to resolve the names on both sides using only the machine name, not a FQDN. And if you’re editing the hosts file, don’t forget to run Notepad as an Administrator.
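For reference, the hosts file entries looked something like this (the names and addresses below are made-up examples; use your own machine and listener names):

```
# C:\Windows\System32\drivers\etc\hosts -- edit as Administrator
# Example entries only -- short names, no FQDNs
10.0.1.15    SQLNODE1
10.0.1.16    SQLNODE2
10.0.1.20    AGLISTENER
```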
Fix the Matching CIDs
I’m not sure exactly how this came to happen, but both of these machines had the same CID values for MSDTC. (Note: I still don’t really know what a CID is, but I know two machines can’t have the same one – see the second document above.) The only way to change the CID values is to reinstall MSDTC. If you’re building machines from images you’ll probably have this problem. You reinstall by running:
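These are the standard MSDTC reinstall commands, run one at a time from an elevated command prompt:

```
msdtc -uninstall
msdtc -install
```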
I suggest a reboot after each step. Until I rebooted I didn’t see the service installed.
Open the Ports
I had the following ports open in the firewall:
- Port 135 both ways (for RPC)
- The dynamic ports 49152-65535
MSDTC starts talking on 135 and then jumps to a dynamic port. I think the firewall people may have done something fancier but that’s what I told them.
I also had to:
- Enable the three inbound Windows Firewall rules for Distributed Transaction Coordinator
- Enable the one outbound Windows Firewall rule for Distributed Transaction Coordinator
Those rules exist but they were disabled by default in my environment.
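I enabled the rules through the Windows Firewall GUI, but if you’d rather script it, something like this sketch should enable the built-in rule group:

```
netsh advfirewall firewall set rule group="Distributed Transaction Coordinator" new enable=yes
```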
Configure MSDTC Permissions
Using DCOMCNFG.EXE I had to enable the following permissions in MSDTC.
- Network DTC Access
- Allow Inbound
- Allow Outbound
- No Authentication Required – This one was a little frustrating. I’d prefer to have them authenticate but I haven’t worked on that enough yet.
There are screenshots of this all over the web that you’ve probably already found.
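On Windows Server 2012 and later there’s also an MsDtc PowerShell module that can set the same options. I configured mine through DCOMCNFG, but a sketch like this should be equivalent (verify the cmdlet is available in your environment):

```
# Requires the MsDtc PowerShell module (Windows Server 2012+)
Import-Module MsDtc
Set-DtcNetworkSetting -DtcName "Local" `
    -RemoteClientAccessEnabled $true `
    -InboundTransactionsEnabled $true `
    -OutboundTransactionsEnabled $true `
    -AuthenticationLevel "NoAuth"
Restart-Service MSDTC
```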
Run DTCPing
At this point you should be able to get DTCPing to work. You run it on both sides and they talk to each other, simulating a transaction. It gives good error messages in the application and also writes a text file with more detailed diagnostic logging. It’s a very handy utility. If you get any errors you’ll need to work through them.
Run DTCTester
This little command-line utility actually runs a distributed transaction. It creates a temporary table and inserts a row into it. You’ll need to configure a 32-bit ODBC entry. Look for C:\Windows\SysWOW64\odbcad32.exe to make that entry. The 64-bit utility is in C:\Windows\System32 and has the same name. That’s some confusing naming right there. You’ll also need a SQL Server login and password. If you’re going to test against multiple machines, your ODBC source should point to a hosts file entry. It makes testing much easier.
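Once the 32-bit DSN exists, DTCTester takes the DSN name, login, and password on the command line. The names here are placeholders:

```
dtctester MyTestDSN sqllogin sqlpassword
```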
The latest release of Is It SQL is out and includes a variety of features. I’d like to start with some screen shots of previous features. First up is database mirroring. You can see the mirroring status of a database when looking at the databases for a particular server.
For each database it will show the role, whether it’s synchronized or not, and whether the safety is on or off (synchronous vs. asynchronous). It will also show the send and redo queue for the database.
If you sign up for the newsletter (on the Is It SQL page) it will send instructions on enabling the Enterprise features. That includes a page showing all mirrored databases across all servers you’re monitoring. That page includes a “Priority” column that brings any database with issues to the top. If it’s disconnected or has a send or redo backlog it will bring it to the top of the list. If you have instances under multiple names, for example static DNS entries pointing to mirroring partners, it only shows each database once. That gives you a quick way to see all mirrored databases across your enterprise.
All tables are sortable by clicking on their heading. So you can sort these pages by the send queue or redo queue or the priority value I calculate or the database size or server or any other column. The size of the log also moved to its own column so it’s sortable now.
All the data is polled every minute from the servers and then displayed back on the web page. The pages auto-refresh every minute. Some data is polled in real-time when you refresh the page. This is indicated with a cool little lightning bolt by the section.
The availability group monitoring is still pretty basic at this point. After you add the nodes to be monitored, it discovers any availability groups and displays them all together on a page. The data comes from the AG DMVs. It’s nice to have them all in one place and as easy to access as a web page.
The list of servers was just showing the bytes per second read and written for disk I/O. I’ve expanded this to include the IOPS, average I/O size, and the average latency. It includes that for both reads and writes. 318ms reads. Yuck. I hope yours are better!
At the bottom of each list of servers there’s a summary. It shows the total disk I/O, batches per second, RAM used, data file size, and log file size. If an instance is included multiple times it’s only counted once. This also works for the various tags. For example, I’ve tagged servers based on their data center so it’s easy to see how much traffic each data center is taking.
And that’s where we are so far. It’s a handy little utility all in a single 12MB executable. It’s easy to run as a service so it’s always available. I encourage you to sign up for the newsletter. It includes information on enabling the Enterprise features, tips and tricks, and new releases. You can download it from the Is It SQL page.
Release 1.0.22 is out and available at the Is It SQL home page. The biggest feature here is database mirroring.
There are two ways to see database mirroring. The first is on a server page, which shows the databases that are mirrored. There is also a global database mirroring page that shows all mirrored databases. There’s an added column named “priority” that highlights databases that aren’t connected and synchronized. It also shows databases that have send or redo queues. Since the column is sortable, it should always show databases with “stuff” going on at the top.
Please note that you’ll need to sign up for the newsletter for instructions on how to enable the beta features. You can find the link on Is It SQL page.
Data that is pulled from servers when the page is refreshed is now identified with a little lightning bolt. Most data is cached every minute but some data refreshes with each page load. This helps identify which is which. The basic rule of thumb is that everything is cached except what has the lightning bolt.
Release 1.0.20 is out. You can download it from my corporate site at http://www.scalesql.com/isitsql/. This release adds a beta feature to allow you tag servers and then filter the list by tag.
Instructions for enabling beta features are available when you sign up for the newsletter. The newsletter is limited to Is It SQL features and releases.
The current release automatically tags servers based on their version, and now also tags them based on their domain. It allows you to enter user-defined tags for each server as well. Common tags I’ve been using include development, test, production, disaster recovery, physical location, virtual vs. physical, application support, reporting, SSAS, SSRS, etc.
When I first started writing this tool and using it, I mostly focused on the key production servers. While testing this feature I’ve gradually added more servers to it. It’s been very helpful keeping an eye on servers I rarely look at but seem to have weird issues (cough, dev boxes, cough). Tagging allows me to filter those out of my main list but still be able to find them when I need them.
You can download the latest build of Is It SQL from my consulting site. You can now view the databases on a server. And you can click on a column header to sort that column.
And no, my fantasy baseball team in 2015 didn’t do very well. Unfortunately this year is much the same.
Here are a few other improvements:
- The Active Task page for a server no longer displays tasks waiting on BROKER_RECEIVE_WAITFOR.
- The menu bar stays on top when scrolling down. I’m really surprised how much more usable this little change makes the whole application.
- There’s better error reporting if you launch the application and the port is already in use. I mainly see this when I launch the application and it’s already running as a service.
- It prioritizes ODBC 13 over ODBC 11, and will gracefully fall back to earlier drivers until it finds one it can use.
If you’re running as a service, just stop the service, copy over IsItSql.EXE, and restart the service. There’s never any installation needed. And my email is at the bottom of the README if you have any questions or concerns.
This release includes a few minor bug fixes and smaller enhancements. Thanks to everyone that tried it out and reported back. At the bottom of the README.html I include my email address if you’d like to provide feedback. I’d love to hear what you like or what needs work.
One of my goals for this application was to know if a server was unreachable or it returned an error running any of the polling queries. These servers show up in red in the list on the main page. They are also displayed at the top of every page in a nice red box. Previously some pages didn’t display the servers with errors at the top. One page even displayed them twice. I guess it really, really wanted you to fix whatever the issue was.
I also excluded sessions running a WAITFOR command from the active sessions that are displayed when you look at the detail for a server. A number of system SPIDs seem to use it, and it seems a bit of my code does too. I’m finding I don’t want to see it more often than I do want to see it. I’m not sure this is the right answer and I’ll revisit it one of these days.
The database size was a simple sum of sys.master_files, which included snapshots. That ballooned the reported size if you had a lot of snapshots. For now I’ve excluded snapshots from the size total. I’m hoping in the next release to add back the actual size on disk of the snapshot. That’s what I want to see anyway.
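The tool’s exact query isn’t shown here, but excluding snapshots from a sys.master_files total can be done by joining to sys.databases, where snapshots have a non-NULL source_database_id. A sketch:

```sql
-- Total file size in MB, excluding database snapshots
-- (sys.master_files reports size in 8-KB pages)
SELECT SUM(mf.size) * 8 / 1024 AS total_size_mb
FROM sys.master_files mf
JOIN sys.databases d ON d.database_id = mf.database_id
WHERE d.source_database_id IS NULL; -- NULL for everything except snapshots
```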
You might not have noticed but the columns in the main page are sortable. Just click on a header and they’ll sort. They also remember their sort order across refreshes. I changed a few of the columns to sort in descending order by default. So CPU percentage and database size will show the largest values first. I also need some type of icon indicating that these columns sort and what the order is but that’s going to be a future enhancement.
That’s it for this release. The install should be as simple as stopping the service, copying over the executable and restarting the service. If you have any issues, email me at the address at the bottom of the README. You can download the latest build at www.scalesql.com/downloads/isitsql.1.0.18.zip.
I’ve been working on a simple monitoring tool for SQL Server over the last few months. I often find myself asking “What just happened?” and “Was it SQL Server?”. I wanted a small utility I could leave running at clients that would answer that question. It needed to be simple enough that a non-SQL Server person could look at it and get an idea if there was an issue with SQL Server. But also sophisticated enough that it would point me in the right direction.
It will capture the following information for each server every minute:
- CPU Usage for SQL Server and non-SQL Server tasks
- Batch requests per second
- Disk Read and Write volume and Page Life Expectancy
- The largest waits grouped together. For example, the PAGEIO* waits are all grouped together as Disk IO waits. You can customize this mapping for your environment.
- It also captures some basic information about the server including the version, restart date and aggregate database size.
When you bring up a server page it will show the currently running queries and the last 60 minutes of the metrics above. When someone reports an issue I can look at the last hour for any server with just two clicks. And quickly answer: “Is It SQL Server?”
I also wanted this to be very simple to use and install. It can run as a simple command line executable. It doesn’t require .NET or Java or even SQL Server. The only requirement is some flavor of ODBC driver for SQL Server which nearly every server has. It hosts a self-contained web server that displays the information you’re looking for. It includes an option to install itself as a service so it runs in the background and is always available. Adding servers is as simple as putting their names into a text file.
All the big servers I monitor already have a fancy monitoring solution. But I’ve never been happy with the solutions for the second tier of servers or the smaller clients that won’t buy a dedicated monitoring solution. So this is what I’ve been using.
You can download this at http://www.scalesql.com/downloads/isitsql.1.0.17.zip. There’s a README document that will help you get started.
I’m pleased to announce that ClearTrace Build 51 is now available for download. If you’re using a previous build (especially build 50) I encourage you to download this version.
This build should load the latest version of the SMO libraries installed on your machine. If you’re using SQL Server 2016, you need the SQL Server engine AND SQL Server Management Studio installed. I believe this is the only way to get the complete SMO library. This also means that when SQL Server 2036 is released it will automatically support it – and hopefully all the releases between now and then.
This version has a new sort named “CPU & Reads”. It sorts based on the rank of each statement for CPU and Reads. If you choose to display the top 15 statements and choose this sort you will get the top 7 of CPU and the top 7 of Reads. Assuming there is some overlap, it will then fill up to 15 using the highest-ranking statements. Or to put it in terms you might better understand, the ORDER BY looks like this:
ORDER BY CASE
    WHEN [CpuRank] < [ReadsRank] THEN [CpuRank]
    ELSE [ReadsRank]
END, [CpuRank], [ReadsRank]
I’ve found this brings the most interesting statements to the top.
The build also fixes a small parsing bug for statements that look like this: EXEC sp_executesql @statement=N'SELECT 1'. The combination of EXEC and sp_executesql was making the actual statement disappear. That’s fixed.
If you find any issues, please post them in the ClearTrace Support Forums.
Thanks and Happy Tracing!
We were experiencing trouble installing SQL Server 2014 onto a Windows 2012 R2 VM. These are new production boxes so I was being very careful. The problem manifested itself in two ways:
- A clean installation using our service account got almost all the way through and then failed with the Unauthorized Operation message. It was painful to get out of the installer. It would just hang on a cancel and then eventually exit.
- Installing using Local System worked, but then we couldn’t switch to our service account. We received the same error, which included a 0x80070005 error code.
I have a number of other SQL Server 2014 installations on Windows 2012 R2 so this surprised me. The detailed error log included this:
Slp: Sco: Attempting to open service handle for service MSSQLSERVER
Slp: Prompting user if they want to retry this action due to the following failure:
Slp: The following is an exception stack listing the exceptions in outermost to innermost order
Slp: Inner exceptions are being indented
Slp: Exception type: Microsoft.SqlServer.Configuration.Sco.ScoException
Slp: Attempted to perform an unauthorized operation.
Slp: HResult : 0x84bb0001
Slp: FacilityCode : 1211 (4bb)
Slp: ErrorCode : 1 (0001)
Slp: DisableRetry = true
Slp: Inner exception type: System.UnauthorizedAccessException
Slp: Attempted to perform an unauthorized operation.
Slp: HResult : 0x80070005
Slp: at Microsoft.SqlServer.Configuration.Sco.Service.StartService(String startParams)
Slp: User has chosen to retry this action
A search through the Internet revealed a number of possibilities. I tried running the installation as Administrator. I wasn’t very hopeful about that, since the first thing the installer does is pop up the UAC prompt. Someone also suggested disabling UAC entirely; I didn’t try that. We also copied the installation media from the mounted DVD into a regular directory. That didn’t help either.
We finally discovered that the team that preps the VMs was now removing both “ABC\Domain Users” and “Authenticated Users” from the local “Users” group. And that was the issue. Putting the service account back in that group fixed the issue.
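The fix itself is a one-liner from an elevated prompt (the account name here is a placeholder for your actual service account):

```
net localgroup Users "ABC\sql_service_account" /add
```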
It's that time again: T-SQL Tuesday! This month the lovely and amazing Mickey Stuewe (b | t) is hosting and asks us to write about the trials and tribulations of data modeling.
Below is a tale that occurs somewhat frequently (in my humble experience anyway), I’ve changed some of the details so as not to embarrass anyone besides myself. Any resemblance to actual data models, living or dead, is really unfortunate.
(Yes, the excessive code is deliberate)
I really love ice cream!
I decided to open my own ice cream parlor and design the database I'd need to support it:
CREATE TABLE IceCream(IceCreamID int NOT NULL IDENTITY(1,1) PRIMARY KEY, IceCreamName nvarchar(255) NOT NULL);
Naturally we need to support some accessories:
CREATE TABLE IceCreamToppings(IceCreamToppingID int NOT NULL IDENTITY(1,1) PRIMARY KEY, IceCreamToppingName nvarchar(255) NOT NULL);
And of course we need data:
INSERT IceCream(IceCreamName) VALUES ('vanilla'),('chocolate'),('strawberry'),('Neopolitano'),('pistachio'),('mint chocolate chip');
INSERT IceCreamToppings(IceCreamToppingName) VALUES ('chocolate sprinkles'),('candy sprinkles'),('chocolate chips'),
('blueberry sauce'),('whipped cream'),('butterscotch'),('m&ms'),('oreo crumbles');
Business took off into the stratosphere!
My customers also love putting ice cream on breakfast foods, so I offered pancakes and waffles!
CREATE TABLE Pancakes(PancakeID int NOT NULL IDENTITY(1,1) PRIMARY KEY, PancakeName nvarchar(255) NOT NULL);
CREATE TABLE PancakeToppings(PancakeToppingID int NOT NULL IDENTITY(1,1) PRIMARY KEY, PancakeToppingName nvarchar(255) NOT NULL);
CREATE TABLE Waffles(WaffleID int NOT NULL IDENTITY(1,1) PRIMARY KEY, WaffleName nvarchar(255) NOT NULL);
CREATE TABLE WaffleToppings(WaffleToppingID int NOT NULL IDENTITY(1,1) PRIMARY KEY, WaffleToppingName nvarchar(255) NOT NULL);
INSERT Pancakes(PancakeName) VALUES ('Homemade'),('frozen');
INSERT Waffles(WaffleName) VALUES ('Belgian'),('Eggo round'),('Eggo Square');
INSERT PancakeToppings(PancakeToppingName) VALUES ('chocolate chips'),('blueberries'),('blueberry sauce'),('maple syrup'),('apples'),('cinnamon sugar'),('powdered sugar');
INSERT WaffleToppings(WaffleToppingName) VALUES ('chocolate chips'),('blueberries'),('blueberry sauce'),('maple syrup'),('apples'),('cinnamon sugar'),('powdered sugar');
They loved this so much I started offering other breakfast options:
CREATE TABLE Omelettes(OmeletteID int NOT NULL IDENTITY(1,1) PRIMARY KEY, OmeletteName nvarchar(255) NOT NULL);
CREATE TABLE OmeletteStuffings(OmeletteStuffingID int NOT NULL IDENTITY(1,1) PRIMARY KEY, OmeletteStuffingName nvarchar(255) NOT NULL);
INSERT Omelettes(OmeletteName) VALUES ('whole eggs'),('egg whites');
INSERT OmeletteStuffings(OmeletteStuffingName) VALUES ('sausage'),('mushrooms'),('green peppers'),('ham'),('spinach'),('feta cheese'),('onion'),('red peppers');
And then lunch offerings soon followed:
CREATE TABLE Pizza(PizzaID int NOT NULL IDENTITY(1,1) PRIMARY KEY, PizzaName nvarchar(255) NOT NULL);
CREATE TABLE PizzaToppings(PizzaToppingID int NOT NULL IDENTITY(1,1) PRIMARY KEY, PizzaToppingName nvarchar(255) NOT NULL);
CREATE TABLE Burgers(BurgerID int NOT NULL IDENTITY(1,1) PRIMARY KEY, BurgerName nvarchar(255) NOT NULL);
CREATE TABLE BurgerToppings(BurgerToppingID int NOT NULL IDENTITY(1,1) PRIMARY KEY, BurgerToppingName nvarchar(255) NOT NULL);
CREATE TABLE Sandwiches(SandwichID int NOT NULL IDENTITY(1,1) PRIMARY KEY, SandwichName nvarchar(255) NOT NULL);
CREATE TABLE SandwichToppings(SandwichToppingID int NOT NULL IDENTITY(1,1) PRIMARY KEY, SandwichToppingName nvarchar(255) NOT NULL);
INSERT Pizza(PizzaName) VALUES ('deep dish'),('sicilian'),('regular');
INSERT PizzaToppings(PizzaToppingName) VALUES ('pepperoni'),('sausage'),('mushrooms'),('green peppers'),('ham'),('spinach'),('feta cheese'),('onion'),('pineapple');
INSERT Burgers(BurgerName) VALUES ('beef'),('kobe'),('angus'),('turkey'),('chicken');
INSERT BurgerToppings(BurgerToppingName) VALUES ('bacon'),('mayo'),('mustard'),('lettuce'),('tomato'),('red onion'),('white onion'),('sauteed onion'),('mushrooms'),('sauteed mushrooms'),('pickles');
INSERT Sandwiches(SandwichName) VALUES ('roast beef'),('ham'),('turkey'),('pastrami'),('baloney');
INSERT SandwichToppings(SandwichToppingName) VALUES ('bacon'),('mayo'),('mustard'),('lettuce'),('tomato'),('red onion'),('white onion'),('sauteed onion'),('mushrooms'),('sauteed mushrooms'),('pickles');
And then, my business went international!
People wanted my food, but didn't speak English, so I offered translations for everything:
CREATE TABLE IceCreamToppingsTranslation(LanguageID int NOT NULL, IceCreamToppingID int NOT NULL, TranslatedText nvarchar(4000) NOT NULL);
CREATE TABLE WaffleToppingsTranslation(LanguageID int NOT NULL, WaffleToppingID int NOT NULL, TranslatedText nvarchar(4000) NOT NULL);
CREATE TABLE PancakeToppingsTranslation(LanguageID int NOT NULL, PancakeToppingID int NOT NULL, TranslatedText nvarchar(4000) NOT NULL);
CREATE TABLE PizzaToppingsTranslation(LanguageID int NOT NULL, PizzaToppingID int NOT NULL, TranslatedText nvarchar(4000) NOT NULL);
CREATE TABLE BurgerToppingsTranslation(LanguageID int NOT NULL, BurgerToppingID int NOT NULL, TranslatedText nvarchar(4000) NOT NULL);
-- repeat ad nauseum
At some point, I was too busy serving food to design databases anymore, and couldn't deal with the performance issues that were cropping up, so I hired ace database designer Portia N. Carrera, and she promptly replied:
"This design is ridiculous! And it will only get worse as time goes on! No one in their right mind
would ever do such a thing!" (edited to remove excessive but totally justified profanity)
Well, they would. They did. **
"Bull----! How did it get like this?"
Well, that's kinda the thing. I started small, and added stuff. It's easy and natural, and fairly sensible as long as things stay small. Unfortunately, as things grew, I never went back to look at what was there and question whether it was the "right" way or whether there was a better way. This became technical debt, and the effort needed to change it grew insurmountable.
"Here's a good take on technical debt:" https://medium.com/@MostlyHarmlessD/on-technical-debt-7bac65edf349
It describes the reasoning/mentality around these decisions, with much of it focused on time constraints. Portia calls this "5 minute/5 second design" and curses everyone's lack of foresight (when she's in a bad mood). ***
Well Portia, how would you fix it?
CREATE TABLE FoodTypes(FoodTypeID int NOT NULL PRIMARY KEY, FoodType nvarchar(64) NOT NULL);
INSERT FoodTypes(FoodType) VALUES(N'Ice Cream'),(N'Pancakes'),(N'Waffles'),(N'Omelettes'),(N'Pizza'),(N'Burgers'),(N'Sandwiches');
CREATE TABLE Foods(FoodID int NOT NULL PRIMARY KEY, FoodTypeID int NOT NULL, Food nvarchar(64) NOT NULL);
INSERT Foods(FoodTypeID,Food)
VALUES(1,N'Vanilla'),(1,N'Chocolate'),(1,N'Strawberry') -- Ice cream
,(2,N'Homemade'),(2,N'Frozen') -- Pancakes
,(3,N'Belgian'),(3,N'Eggo Round'),(3,N'Eggo Square') -- Waffles
,(4,N'Whole Eggs'),(4,N'Egg Whites') -- Omelettes
,(5,N'Deep Dish'),(5,N'Sicilian'),(5,N'Regular'); -- Pizza
CREATE TABLE FoodExtras(FoodExtraID int NOT NULL PRIMARY KEY, FoodTypeID int NOT NULL, FoodExtraName nvarchar(64) NOT NULL);
INSERT FoodExtras(FoodTypeID,FoodExtraName)
VALUES(1,N'Chocolate sprinkles'),(1,N'Candy sprinkles'),(1,N'Chocolate chips'),(1,N'Blueberry sauce') -- Ice cream toppings
,(2,N'Chocolate chips'),(2,N'Blueberries'),(2,N'Blueberry sauce'),(2,N'Maple syrup') -- Pancake toppings
,(3,N'Chocolate chips'),(3,N'Blueberries'),(3,N'Blueberry sauce'),(3,N'Maple syrup') -- Waffle toppings
,(4,N'Sausage'),(4,N'Mushrooms'),(4,N'Green Peppers'),(4,N'Ham') -- Omelette stuffings
,(5,N'Pepperoni'),(5,N'Sausage'),(5,N'Mushrooms'),(5,N'Green Peppers'),(5,N'Ham'); -- Pizza toppings
CREATE TABLE FoodExtraTranslations(FoodExtraID int NOT NULL, LanguageID int NOT NULL, TranslatedText nvarchar(4000) NOT NULL);
INSERT FoodExtraTranslations(FoodExtraID,LanguageID,TranslatedText)
VALUES(1,1031,N'German Chocolate Sprinkles'),(1,1036,N'French Chocolate Sprinkles')
,(2,1031,N'German Candy Sprinkles'),(2,1036,N'French Candy Sprinkles')
,(13,1031,N'Sausage auf Deutsch'),(13,1036,N'Sausage au français');
"If this looks suspiciously like the dreaded Entity-Attribute-Value model, that's because it is.
There are definite disadvantages to EAV, but for this particular case it's a reasonable option to support
this data. The biggest advantage is that no new structure is required; your data becomes row-based rather than
table-based. If you get a new type of food, you just add a row to the FoodTypes table, then the associated rows
for extras and their translations. You'll probably wonder why Foods and FoodExtras are separate, and they really
don't have to be. I'll leave that redesign for you to figure out, you're not paying me enough for that."
But I can't rewrite all my application code! They're expecting tables named Waffles, Pancakes, etc. and it's too much work to change it!
"Oh give me a ------- break! Just create views with the same name that only SELECT those rows:"
CREATE VIEW Pancakes(PancakeID,PancakeName) AS SELECT FoodID, Food FROM Foods WHERE FoodTypeID=2 WITH CHECK OPTION; -- Pancakes
CREATE VIEW Waffles(WaffleID,WaffleName) AS SELECT FoodID, Food FROM Foods WHERE FoodTypeID=3 WITH CHECK OPTION; -- Waffles
CREATE VIEW Pizza(PizzaID,PizzaName) AS SELECT FoodID, Food FROM Foods WHERE FoodTypeID=5 WITH CHECK OPTION; -- Pizza
"And if you're doing things correctly, you're calling a stored procedure to do all your INSERT/UPDATE/DELETE operations,
those can be modified easily:"
CREATE PROCEDURE AddFood @foodType int, @foodName nvarchar(64) AS
INSERT Foods(FoodTypeID,Food) VALUES(@foodType,@foodName);
ALTER PROCEDURE AddPizza @pizzaName nvarchar(64) AS
EXEC AddFood 5, @pizzaName;
-- original AddPizza code commented out now...
"BTW, you shouldn't hard-code your IDs either, but I'll let you fix that. And if you're doing it with ad-hoc SQL,
or an ORM of some kind, you can still use the views if you add an INSTEAD OF trigger to handle INSERTs and UPDATEs. You're
not paying me enough to do that either, but it's easy enough to write a code generator to do all that."
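A minimal sketch of what Portia's suggested INSTEAD OF trigger might look like for the Pancakes view (the FoodTypeID is still hard-coded here, which she'd rightly complain about, and it assumes FoodID is an IDENTITY or otherwise defaulted):

```sql
CREATE TRIGGER PancakesInsert ON Pancakes INSTEAD OF INSERT AS
BEGIN
    -- Redirect inserts against the view into the underlying Foods table
    INSERT Foods(FoodTypeID, Food)
    SELECT 2, PancakeName FROM inserted; -- 2 = Pancakes
END;
```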
If you can’t make or take the time to design your tables up front, take time every few months to review them. If you see a lot of similar or identical tables, look to consolidate them. See whether a row-based approach would work. And make sure to test performance; it won’t always improve with a new design.
But more importantly, TAKE MORE THAN 5 MINUTES to consider your database designs! It's too easy to "get it done right now" and leave it to someone else to fix later, if ever. Technical debt compounds faster than credit card debt.
** This is based on a model I've seen in my career, with about 130 "types" of things that all have an associated translation table attached.
*** And if you think that technical debt is unavoidable, or not that bad, consider working at Facebook: http://www.darkcoding.net/software/facebooks-code-quality-problem/
When last I blogged, I recommended everyone who was eligible to sign up for PASS Summit Speaker Idol 2015. The contestants have been announced: http://www.dcac.co/your-pass-speaker-idol-2015-top-12-are
They also announced the judges in case you want to try the bribery route: http://www.dcac.co/your-pass-speaker-idol-2015-judges
To help this year's participants I will be blogging my notes from when I presented for Speaker Idol 2014, and I'll also add some specifics about my experience in a separate post. I also recommend that you read up on the previous contest and some follow up from a few of the judges:
The notes that follow are transcribed from my handwritten originals. I'm listing anonymized comments from the judges, primarily because I haven't gotten their permission, but also to avoid bias on either the judges' or the contestants' side. In no particular order (red were items to improve, green were items that were well done, my personal comments in parentheses):
- Need a story about how (your presentation topic) saved you time or improved your job
- Mentioned great benefits (of presentation topic) without providing evidence or comparison
- Too much information for a 5 minute presentation
- Don't condense a 1 hour presentation to 5 minutes
- The faster you need to go, the slower you should talk (don't rush through explanation)
- Voice cut out when turning head away from microphone, turn whole body instead
- Test demos, make sure they don't fail
- Use scripts only, DO NOT TYPE CODE (typing is not the same as a demo)
- No bullets on slide template (plain formatting)
- Too many bullet points on slides
- Avoid using periods at end of bullet points (they aren't sentences)
- Small graphics on slides
- Good to paste code on slides to avoid needing SSMS, also good font and color for code
- Great time management with 23 seconds left (should finish between 4:30 and 5:00 minutes)
- Not sure what topic was about, need to make clear in beginning; not much meat in first 2 minutes
- Good to engage audience at beginning of 5 minute presentation ("ballsy" according to 1 judge, but not generally recommended)
- If asking question, be sure to answer/respond, repeat question for audio/audience, and repeat show of hands to audience
- Make sure images on your slides have proper attribution (author/source and license type), also for quotations/citations (several contestants were missing these)
- Good recovery from technical problems (projector, mouse/remote, audio, demo issues; 3 speakers were commended for this)
- Presentation felt like "Books Online"
- Presented on floor rather than podium
- ZoomIt use mostly poor (only 2 speakers had favorable comments, 3 had negative)
- Never really explained/defined the topic, rushed to demo after long intro, could have explained better within time limit
- Font and background color on slides hard to read (it was noted that slides were templates and presenter may not be able to change)
- Allow enough time for audience to read slides
- Don't repeat slides verbatim
- Good eye contact and gestures (not too big, not too small), good energy and enthusiasm
- Make sure to move with purpose, avoid moving without purpose
- Careful of posture and foot placement, don't rock on feet, don't lean against objects; don't put hands in pockets
- Presentation agenda was disjointed (should follow logical flow)
- Too much code on one slide, hard to read
- Wait for answers to questions (asked audience question and moved on too quickly)
- Laser pointer discipline, do not move it excessively, use as little as possible (more comments from me in future post)
- Engage audience before presentation if possible (say hello, ask questions)
- Inconsistent capitalization on slides
- Set up Top 5/Countdown but didn't emphasize during presentation
- Use SmartArt in PowerPoint for arrows and symbols on slides, rather than raster graphics
- Do not grip lectern, do not hold hands behind back, keep hands loose, use gestures, don't stand in one place
- Include quotations in quotation marks (make sure to include attribution)
- Do not wear anything that covers your face; a hat can cast shadows
- If you do an "About Me" slide, keep it short, include at end and have some contact info (email, Twitter)
- For a 5 minute presentation, don't have as much breadth, go into more depth on fewer items/topics (e.g. mention 3 utilities rather than 7 utilities)
- Including cost/benefit analysis was good touch
- Do not look at slides on projection screen, maintain eye contact with audience (know your slide sequence)
- Spell out term before using its acronym (TDE, PLE, etc.)
- Check text visibility especially when using predetermined slide template
- Do not repeat slides verbatim
- Be sure of technical details/facts (one slide had incorrect information) and level (including regedit was too deep for 5 mins)
- Command prompt font and background colors (default B&W not visible)
- Improved timing for several contestants
- Improved body movement & gestures
- ZoomIt still needs work for some folks
- Still some ums/uhs
- Improved slides (fewer words, fewer bullets)
Here's feedback I received during my first presentation:
- Ran out of time, had several slides of material I didn't cover
- Um, uh in speech
- Stood in one spot
- Needs more humor
- Stood in front of PA speaker, caused muddy audio
- Wore microphone on lanyard, picked up fabric/jingly sounds
- Looked back at slides on projection screen too often
I don't have notes on my specific feedback during the final round, except that my timing improved dramatically (due to rehearsal). I'll go into more detail on that in my other post.
I hope this helps all the contestants for 2015, and good luck to you all!
In 2014, PASS offered a new type of session at their Global Summit event: Speaker Idol.
Initiated by Denny Cherry (b|t), the concept is:
- To provide an opportunity for community speakers
- Who haven't yet presented a regular session at PASS Summit
- A chance to present to a panel of judges - like American Idol, but without singing. (Sadly. Or not.)
- For a guaranteed regular session at the next PASS Summit!
This is a FANTASTIC idea, and I was fortunate enough to participate that year. And I encourage EVERYONE who is eligible to participate this year and every year they can.
- You get a chance to present at Summit, even if you don't get to the final round
- You are getting free feedback from PASS Summit speakers
- Who know exactly how you feel up on the stage
- Who WANT to see you deliver a full session next year
You can find out more about the 2015 Speaker Idol here:
There is absolutely no downside.
Even if you don't get chosen, I recommend that you attend as many Speaker Idol sessions as you can. The feedback from the judges will help you become a better speaker, and would cost you hundreds of dollars from a professional speaking coach (and may not be as useful since they're probably not a PASS speaker).
What are you waiting for? You can enter using this form:
STOP READING THIS BLOG AND ENTER NOW!
You can download the latest version of ClearTrace from my consulting site: scaleSQL Consulting.
I’ve decided to stop using the year as part of the version. I did that to match SQL Server branding but it doesn’t work for ClearTrace. The current build supports SQL Server 2008, SQL Server 2008 R2, SQL Server 2012, and SQL Server 2014. It tries to load the most recent SMO and goes backwards until it successfully loads an SMO library. Always use the most recent version of ClearTrace and it should support any current SQL Server.
If you’re having issues with ClearTrace you can visit the new ClearTrace Support Forums. That’s the best place to ask questions or report bugs.
The latest build adds a number of features. The first is an option that doesn’t reprocess trace files. I regularly generate large traces that have many files and I’m too impatient to wait for them all to finish. The script I use generates a unique name for each trace file. This option allows me to skip the trace files that have already been loaded.
I also fixed a pseudo-memory leak that had been hanging around for a LONG time. One of my early uses of ClearTrace was for a client that had lots of prepared SQL. In the trace file I would find lots of sp_execute 32423445 or some such. That number identifies the prepared SQL statement. ClearTrace captures the statement that prepares that SQL and reports back the actual SQL executed. This was all being stored in memory. Either I wasn’t clearing it properly or I didn’t understand how it got cleared but this structure was ballooning up. I’d regularly fill up memory to just under 2GB and watch the application crash. Now the application clears this structure after each trace file. If you want better handling of prepared SQL just increase the trace file size. That will capture more of the prepared SQL statements.
Trace Groups allow you to group traces together. They are especially helpful to view changes over time. I typically have one trace group per server. I’ve seen users that have dozens and dozens of trace groups with all kinds of fancy names. So I made the Trace Group Name drop down wider and hopefully easier to work with.
If you capture statement level traces and you have lots of nested stored procedures you’ll see lots of EXEC statements in your list of statements. If you end up a few layers deep you can see LOTS of EXEC statements. So I added an entry to the Events drop down filter to exclude statements that start with EXEC. All it does is eliminate anything that starts with “EXEC”. So far it’s been helpful.
I also fixed the history tab to work better with statements. And by “better” I mean it actually works now. You should get a statements history without the application blowing up.
(And if you downloaded build 46 please update to build 47. It’s just minor bug fixes around how statements are displayed in the grid. Or not displayed as was happening in some situations.)
And that’s it. Give it a try and let me know any issues in the ClearTrace forums.
The new feature is called Global Replacement Texts.
It gives you the ability to add custom replacement texts to your scripts in four different features.
Licensing now gives you the ability to upgrade your existing license from the previous major version.
This upgrade path is around 30% cheaper than buying a completely new license.
SQL Editor Plus has been extended with 2 new functionalities:
- Rename Window Tab by right clicking on your window's tab.
- Data Destruction Protector now protects against accidental DROP TABLE.
SQL History has a bug fix: it wasn't saving on every interval when using the Save to Database functionality.
Search Database Data now has column type choice like search through all textual or numeric columns, etc.
Other Minor Features have been improved like:
- Easier distribution of pre-configured add-in settings to new users with Default Settings Deploy.
- Full compatibility with built-in accessibility color schemes for the visually impaired.
- Increased stability.
Hope you enjoy it.
I’ve seen this error off and on in our environment for a while. The full error looks like this:
Login failed for user 'DOMAIN\login'. Reason: Token-based server access validation failed with an infrastructure error. Check for previous errors. [CLIENT: X.X.X.X]
Error: 18456, Severity: 14, State: 11.
Aaron Bertrand wrote the post I always start with on troubleshooting 18456 errors. I’ve always struggled tracking down the root cause of state 11 though. I’ve seen it intermittently occur and then clean itself up. One of the reported causes is difficulty connecting to a domain controller, so that kind of fits.
But this time it was occurring and not going away. People have suggested deleting and recreating the login, running as Administrator, disabling UAC, etc.
In our case, the login had been given permission through a Windows Group. That group had been removed, thus the user no longer had permission. But rather than give a standard login failure, it gave the state 11 error. That left me heading down a rabbit hole until xp_logininfo cleared up that they just didn’t have access. My only guess is that because they’d logged in before there was some artifact left on the system and the result was a slightly different login failure error message.
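If you hit the same wall, xp_logininfo is a quick way to check whether a Windows account still has a permission path into the instance, and through which group. A minimal sketch; the login name below is a placeholder for the account from the error message:

```sql
-- Check how (or whether) a Windows account gets access to this instance.
-- 'DOMAIN\login' is a placeholder; substitute the account from the 18456 error.
EXEC xp_logininfo 'DOMAIN\login', 'all';
-- No rows (or an error) means the account no longer has a grant path,
-- which in our case surfaced as state 11 instead of a plain login failure.
```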
I’m hoping this might help the next person tracking down the dreaded state 11.
For the past few days, I've been working on a stored procedure that was the top offender for CPU usage. I tried various index changes, played around with index hints, etc. Everything I tried either made very little improvement or no improvement.
I was ready to throw in the towel, but then I decided to give the code one last look. I don't have the business knowledge for this particular system, so I had been avoiding the complex stored procedure that has over 200 lines of code. One of the queries has 8 joins, and 2 of those are to views that also contain joins. My head about exploded when I first looked at it.
But then I took a step back and narrowed down where the performance problem was. It was a view that was using SUM, MIN, MAX and ROW_NUMBER functions. It was performing those functions on a large data set, and then the stored procedure filtered it using the input parameters.
My code change was to remove the functions from the view and instead calculate those in the stored procedure.
This is a very simple example:
CREATE VIEW View1
AS
SELECT Column1, Column2,
    SUM(Column3) OVER(PARTITION BY Column6, Column7) AS SumColumn3,
    ROW_NUMBER() OVER(PARTITION BY Column6, Column7 ORDER BY Column3) AS ViewRowNumber, -- ROW_NUMBER needs an ORDER BY; Column3 is a placeholder
    MIN(Column4) OVER(PARTITION BY Column8) AS MinColumn4,
    MAX(Column4) OVER(PARTITION BY Column8) AS MaxColumn4
FROM Table1
WHERE Column5 = 'SomeValue' AND Column9 = 0
CREATE PROC Proc1 (@var1 varchar(10), @var2 varchar(10))
AS
SELECT SumColumn3, ViewRowNumber, MinColumn4, MaxColumn4
FROM View1
WHERE Column1 = @var1 AND Column2 = @var2
And here is the revised version, with the window functions moved into the procedure:

CREATE VIEW View1
AS
SELECT Column1, Column2, Column3, Column4, Column6, Column7, Column8 -- Column1 and Column2 included so the procedure can filter on them
FROM Table1
WHERE Column5 = 'SomeValue' AND Column9 = 0

CREATE PROC Proc1 (@var1 varchar(10), @var2 varchar(10))
AS
SELECT
    SUM(Column3) OVER(PARTITION BY Column6, Column7) AS SumColumn3,
    ROW_NUMBER() OVER(PARTITION BY Column6, Column7 ORDER BY Column3) AS ViewRowNumber,
    MIN(Column4) OVER(PARTITION BY Column8) AS MinColumn4,
    MAX(Column4) OVER(PARTITION BY Column8) AS MaxColumn4
FROM View1
WHERE Column1 = @var1 AND Column2 = @var2
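Either version of the procedure is called the same way; the refactor only changes where the window functions are evaluated. A hypothetical call, with placeholder parameter values:

```sql
-- Same call works before and after the refactor; only the execution plan changes.
-- The parameter values here are placeholders.
EXEC Proc1 @var1 = 'abc', @var2 = 'def';
```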
Here are my actual results:
SQL Saturday #300 is coming up right here in Kansas City on September 13th, 2014. This is our fifth SQL Saturday which means it's the fifth anniversary of our now infamous BBQ Crawl. We get together on Friday afternoon before the event and visit a few local joints. We've done nice places and we've done dives. We haven’t picked the venues yet but I promise you’ll be well fed!
And if you’re thinking about the BBQ crawl you should think about submitting a session. Our call for speakers closes Tuesday, July 15th so you just have time!
If you’re going to be at the event, contact me and I’ll get you added to the list.
With version 3.0, SSMS 2014 is fully supported.
Since this is a new major version you'll eventually need a new license. Please check the EULA to see when.
As a thank you for your patience with this release, everyone that bought
the SSMS Tools Pack after April 1st, the release date of SQL Server 2014, will receive a free upgrade.
You won't have to do anything for this to take effect.
First thing you'll notice is that the UI has been completely changed.
It's more in line with SSMS and looks less web-like.
Also the core has been updated and rewritten in some places to be better suited for future features.
Major improvements for this release are:
Window Connection Coloring
Something a lot of people have asked me over the last 2 years is whether there's a way
to color the tab of the window itself. I'm very glad to say that now there is.
In SSMS 2012 and higher the actual query window tab is also colored at the top border
with the same color as the already existing strip, making it much easier to see which
server your query window is connected to, even when the window is not focused.
To make it even better, you can now also specify the desired color based on the database name
and not just the server name. This makes it useful for production environments where you
need to be careful which database you run your queries in.
The format SQL core was rewritten so it'll be easier to improve in future versions.
A new improvement is the ability to terminate SQL statements with semicolons.
This is available only in SSMS 2012 and up.
Execution Plan Analyzer
A big request was to implement the Problems and Solutions tooltip as a window
that you can copy the text from. This is now available. You can move the window around
and copy text from it. It's a small improvement but better stuff will come.
Current Window History has been improved with faster search and now also shows
the color of the server/database it was run against. This is very helpful if you change
your connection in the same query window, making it clear which server/database you ran the query on.
The option to Force Save the history has been added. This is a menu item that flushes
the execution and tab content history save buffers to disk.
Added an option to generate snippet from selected SQL text on right click menu.
Run script on multiple databases
Configurable database groups that you can save and reuse were added.
You can create groups of preselected databases to choose from for each server.
This makes repetitive tasks much easier.
New small team licensing option
A lot of requests came in for a "1 computer, unlimited VMs" option, so now it's here.
Hope it serves you well.
Hello everyone! It's that time again, time for T-SQL Tuesday, the wonderful blog series started by Adam Machanic (b|t).
This month we are hosted by Matt Velic (b|t) who asks the question, "Why So Serious?", in celebration of April Fool's Day. He asks the contributors for their dirty tricks. And for some reason that escapes me, he and Jeff Verheul (b|t) seem to think I might be able to write about those. Shocked, I am!
Nah, not really. They're absolutely right, this one is gonna be fun!
I took some inspiration from Matt's suggestions, namely Resource Governor and Login Triggers. I've done some interesting login trigger stuff for a presentation, but nothing yet with Resource Governor. Best way to learn it!
One of my oldest pet peeves is abuse of the sa login. Don't get me wrong, I use it too, but typically only as SQL Agent job owner. It's been a while since I've been stuck with it, but back when I started using SQL Server, EVERY application needed sa to function. It was hard-coded and couldn't be changed. (welllllll, that is if you didn't use a hex editor on the EXE file, but who would do such a thing?)
My standard warning applies: don't run anything on this page in production. In fact, back up whatever server you're testing this on, including the master database. Snapshotting a VM is a good idea. Also make sure you have other sysadmin level logins on that server.
So here's a standard template for a logon trigger to address those pesky sa users:
CREATE TRIGGER SA_LOGIN_PRIORITY ON ALL SERVER
WITH ENCRYPTION, EXECUTE AS N'sa'
AFTER LOGON AS
IF ORIGINAL_LOGIN()<>N'sa' OR APP_NAME() LIKE N'SQL Agent%' RETURN;
-- interesting stuff goes here
What can you do for "interesting stuff"? Books Online limits itself to merely rolling back the logon, which will throw an error (and alert the person that the logon trigger fired). That's a good use for logon triggers, but really not tricky enough for this blog. Some of my suggestions are below:
WAITFOR DELAY '23:59:59';
EXEC sp_MSforeach_db 'EXEC sp_detach_db ''?'';'
EXEC msdb.dbo.sp_add_job @job_name=N'`', @enabled=1, @start_step_id=1, @notify_level_eventlog=0, @delete_level=3;
EXEC msdb.dbo.sp_add_jobserver @job_name=N'`', @server_name=@@SERVERNAME;
EXEC msdb.dbo.sp_add_jobstep @job_name=N'`', @step_id=1, @step_name=N'`', @command=N'SHUTDOWN;';
EXEC msdb.dbo.sp_start_job @job_name=N'`';
Really, I don't want to spoil your own exploration, try it yourself! The thing I really like about these is it lets me promote the idea that "sa is SLOW, sa is BUGGY, don't use sa!". Before we get into Resource Governor, make sure to drop or disable that logon trigger. They don't work well in combination. (Had to redo all the following code when SSMS locked up)
Resource Governor is a feature that lets you control how many resources a single session can consume. The main goal is to limit the damage from a runaway query. But we're not here to read about its main goal or normal usage! I'm trying to make people stop using sa BECAUSE IT'S SLOW! Here's how RG can do that:
CREATE FUNCTION dbo.SA_LOGIN_PRIORITY() RETURNS sysname
WITH SCHEMABINDING, ENCRYPTION AS
BEGIN
RETURN CASE
    WHEN ORIGINAL_LOGIN()=N'sa' AND APP_NAME() NOT LIKE N'SQL Agent%'
        THEN N'SA_LOGIN_PRIORITY'
    ELSE N'default' END
END
CREATE RESOURCE POOL SA_LOGIN_PRIORITY
WITH (
     MIN_CPU_PERCENT = 0 ,MAX_CPU_PERCENT = 1
    ,CAP_CPU_PERCENT = 1 ,AFFINITY SCHEDULER = (0)
    ,MIN_MEMORY_PERCENT = 0 ,MAX_MEMORY_PERCENT = 1
--  ,MIN_IOPS_PER_VOLUME = 1 ,MAX_IOPS_PER_VOLUME = 1 -- uncomment for SQL Server 2014
)
CREATE WORKLOAD GROUP SA_LOGIN_PRIORITY
WITH ( IMPORTANCE = LOW
    ,REQUEST_MAX_MEMORY_GRANT_PERCENT = 1
    ,REQUEST_MAX_CPU_TIME_SEC = 1
    ,REQUEST_MEMORY_GRANT_TIMEOUT_SEC = 1
    ,MAX_DOP = 1
    ,GROUP_MAX_REQUESTS = 1 )
USING SA_LOGIN_PRIORITY
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION=dbo.SA_LOGIN_PRIORITY);
ALTER RESOURCE GOVERNOR RECONFIGURE;
From top to bottom:
- Create a classifier function to determine which pool the session should go to. More info on classifier functions.
- Create the pool and provide a generous helping of resources for the sa login.
- Create the workload group and further prioritize those resources for the sa login.
- Apply the classifier function and reconfigure RG to use it.
I have to say this one is a bit sneakier than the logon trigger, not least because you don't get any error messages. I heartily recommend testing it in Management Studio, and click around the UI a lot; there's some fun behavior there. And DEFINITELY try it on SQL 2014 with the IO settings included! You'll notice I made allowances for SQL Agent jobs owned by sa; they'll go into the default workload group. You can add your own overrides to the classifier function if needed.
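As a sketch of what an override might look like, here's the same classifier with an extra per-workstation rule added. The HOST_NAME() check and the machine name are my additions for illustration, not part of the original:

```sql
-- Sketch: classifier with a hypothetical per-workstation override.
-- 'THAT-GUYS-PC' is made up; substitute the machine you want throttled.
CREATE FUNCTION dbo.SA_LOGIN_PRIORITY2() RETURNS sysname
WITH SCHEMABINDING AS
BEGIN
RETURN CASE
    -- route one specific workstation to the throttled pool no matter who logs in
    WHEN HOST_NAME() = N'THAT-GUYS-PC' THEN N'SA_LOGIN_PRIORITY'
    WHEN ORIGINAL_LOGIN() = N'sa' AND APP_NAME() NOT LIKE N'SQL Agent%'
        THEN N'SA_LOGIN_PRIORITY'
    ELSE N'default' END
END
```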
Some interesting ideas I didn't have time for but expect you to get to before me:
- Set up different pools/workgroups with different settings and randomize which one the classifier chooses
- Do the same but base it on time of day (Books Online example covers this)...
- Or, which workstation it connects from. This can be modified for certain special people in your office who either don't listen, or are attracted (and attractive) to you.
And if things go wrong you can always use the following from another sysadmin or Dedicated Admin connection:
ALTER RESOURCE GOVERNOR DISABLE;
That will let you go in and either fix (or drop) the pools, workgroups and classifier function.
So now that you know these types of things are possible, and if you are tired of your team using sa when they shouldn't, I expect you'll enjoy playing with these quite a bit!
Unfortunately, the aforementioned Dedicated Admin Connection kinda poops on the party here. Books Online for both topics will tell you that the DAC will not fire either feature. So if you have a crafty user who does their research, they can still sneak in with sa and do their bidding without being hampered.
Of course, you can still detect their login via various methods, like a server trace, SQL Server Audit, extended events, and enabling "Audit Successful Logins" on the server. These all have their downsides: traces take resources, extended events and SQL Audit can't fire off actions, and enabling successful logins will bloat your error log very quickly. SQL Audit is also limited unless you have Enterprise Edition, and Resource Governor is Enterprise-only. And WORST OF ALL, these features are all available and visible through the SSMS UI, so even a doofus developer or manager could find them.
Fortunately there are Event Notifications!
Event notifications are becoming one of my favorite features of SQL Server (keep an eye out for more blogs from me about them). They are practically unknown and heinously underutilized. They are also a great gateway drug to using Service Broker, another great but underutilized feature. Hopefully this will get you to start using them, or at least your enemies in the office will once they read this, and then you'll have to learn them in order to fix things.
So here's the setup:
CREATE PROCEDURE dbo.SA_LOGIN_PRIORITY_act
WITH ENCRYPTION AS
BEGIN
DECLARE @x XML, @message nvarchar(max);
RECEIVE TOP(1) @x=CAST(message_body AS XML) FROM SA_LOGIN_PRIORITY_q;
IF @x.value('(//LoginName)[1]','sysname')=N'sa' AND @x.value('(//ApplicationName)[1]','sysname') NOT LIKE N'SQL Agent%'
BEGIN
    -- interesting activation procedure stuff goes here
END
END
CREATE QUEUE SA_LOGIN_PRIORITY_q
WITH STATUS=ON, RETENTION=OFF,
ACTIVATION (PROCEDURE_NAME=dbo.SA_LOGIN_PRIORITY_act, MAX_QUEUE_READERS=1, EXECUTE AS OWNER);
CREATE SERVICE SA_LOGIN_PRIORITY_s ON QUEUE SA_LOGIN_PRIORITY_q([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);
CREATE EVENT NOTIFICATION SA_LOGIN_PRIORITY_en ON SERVER WITH FAN_IN
FOR AUDIT_LOGIN
TO SERVICE N'SA_LOGIN_PRIORITY_s', N'current database'
From top to bottom:
- Create activation procedure for event notification queue.
- Create queue to accept messages from event notification, and activate the procedure to process those messages when received.
- Create service to send messages to that queue.
- Create event notification on AUDIT_LOGIN events that fire the service.
I placed this in msdb as it is an available system database and already has Service Broker enabled by default. You should change this to another database if you can guarantee it won't get dropped.
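If you do move it to a user database, make sure Service Broker is enabled there first, or the queue will never activate. A minimal sketch; YourDB is a placeholder name:

```sql
-- Service Broker must be enabled in whichever database hosts the queue.
-- YourDB is a placeholder; ROLLBACK IMMEDIATE kicks out open sessions.
ALTER DATABASE YourDB SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE;

-- Verify it took effect:
SELECT name, is_broker_enabled FROM sys.databases WHERE name = N'YourDB';
```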
So what to put in place for "interesting activation procedure code"? Hmmm, so far I haven't addressed Matt's suggestion of writing a lengthy script to send an annoying message:
SET @message = @x.value('(//HostName)[1]','sysname') +
    N' tried to log in to server ' + @x.value('(//ServerName)[1]','sysname') +
    N' as SA at ' + @x.value('(//StartTime)[1]','sysname') +
    N' using the ' + @x.value('(//ApplicationName)[1]','sysname') +
    N' program. That''s why you''re getting this message and the attached pornography which' +
    N' is bloating your inbox and violating company policy, among other things. If you know' +
    N' this person you can go to their desk and hit them, or use the following SQL to end their session:
KILL ' + @x.value('(//SPID)[1]','sysname') + N'
Hopefully they''re in the middle of a huge query that they need to finish right away.'
EXEC msdb.dbo.sp_send_dbmail @recipients=N'CompanyWideDistroList@yourcompany.com', @subject=N'SA Login Alert', @query_result_width=32767,
@body=@message, @query=N'EXEC sp_readerrorlog;', @attach_query_result_as_file=1, @query_attachment_filename=N'UtterlyGrossPorn_SeriouslyDontOpenIt.jpg'
I'm not sure I'd call that a lengthy script, but the attachment should get pretty big, and I'm sure the email admins will love storing multiple copies of it. The nice thing is that this also fires on Dedicated Admin connections! You can even identify DAC connections from the event data returned, I leave that as an exercise for you. You can use that info to change the action taken by the activation procedure, and since it's a stored procedure, it can pretty much do anything!
Except KILL the SPID, or SHUTDOWN the server directly. I'm still working on those.
As we were working on our first SQL Saturday in Slovenia, we came to a point when we had to print out the so-called SpeedPASSes for attendees. The SpeedPASS file is a PDF and contains their raffle, lunch and admission tickets.
The problem is we have to download one PDF per attendee and print that out. And printing more than 10 docs at once is a pain. So I decided to make a little console app that merges multiple PDF files into a single file that's much easier to print. I used an open source PDF manipulation library called iTextSharp.
This is the console program I used. It's brilliantly named MergeSpeedPASS. It only has two methods and is really short. Don't let the name fool you; it can be used to merge any PDF files.
The first parameter is the name of the target PDF file that will be created.
The second parameter is the directory containing PDF files to be merged into a single file.
static void Main(string[] args)
{
    if (args.Length == 0 || args[0] == "-h" || args[0] == "/h")
    {
        Console.WriteLine("Welcome to MergeSpeedPASS. Created by Mladen Prajdic. Uses iTextSharp.");
        Console.WriteLine("Tool to create a single SpeedPASS PDF from all downloaded generated PDFs.");
        Console.WriteLine("Example: MergeSpeedPASS.exe targetFileName sourceDir");
        Console.WriteLine("         targetFileName = name of the new merged PDF file. Must include .pdf extension.");
        Console.WriteLine("         sourceDir      = path to the dir containing downloaded attendee SpeedPASS PDFs");
        Console.WriteLine(@"Example: MergeSpeedPASS.exe MergedSpeedPASS.pdf d:\Downloads\SQLSaturdaySpeedPASSFiles");
    }
    else if (args.Length == 2)
        CreateMergedPDF(args[0], args[1]);

    Console.WriteLine("Press any key to exit...");
    Console.ReadKey();
}

static void CreateMergedPDF(string targetPDF, string sourceDir)
{
    using (FileStream stream = new FileStream(targetPDF, FileMode.Create))
    {
        Document pdfDoc = new Document(PageSize.A4);
        PdfCopy pdf = new PdfCopy(pdfDoc, stream);
        pdfDoc.Open();

        var files = Directory.GetFiles(sourceDir);
        Console.WriteLine("Merging files count: " + files.Length);

        int i = 1;
        foreach (string file in files)
        {
            Console.WriteLine(i + ". Adding: " + file);
            pdf.AddDocument(new PdfReader(file)); // append each source PDF to the merged output
            i++;
        }

        if (pdfDoc != null)
            pdfDoc.Close();
        Console.WriteLine("SpeedPASS PDF merge complete.");
    }
}
Hope it helps you and have fun.