I’m pleased to announce that ClearTrace Build 51 is now available for download. If you’re using a previous build (especially build 50) I encourage you to download this version.
This build should load the latest version of the SMO libraries installed on your machine. If you’re using SQL Server 2016, you need the SQL Server engine AND SQL Server Management Studio installed. I believe this is the only way to get the complete SMO library. This also means that when SQL Server 2036 is released it will automatically support it – and hopefully all the releases between now and then.
This version has a new sort named “CPU & Reads”. It sorts based on the rank of each statement for CPU and for Reads. If you choose to display the top 15 statements and choose this sort, you will get the top 7 by CPU and the top 7 by Reads. Assuming there is some overlap, it then fills up to 15 using the next-highest-ranking statements. Or to put it in terms you might better understand, the ORDER BY looks like this:
ORDER BY CASE WHEN [CpuRank] < [ReadsRank] THEN [CpuRank]
              ELSE [ReadsRank]
         END, [CpuRank], [ReadsRank]
I’ve found this brings the most interesting statements to the top.
The build also fixes a small parsing bug for statements that look like this: EXEC sp_executesql @statement=N'SELECT 1'. The combination of EXEC and sp_executesql was making the actual statement disappear. That’s fixed.
If you find any issues, please post them in the ClearTrace Support Forums.
Thanks and Happy Tracing!
We were experiencing trouble installing SQL Server 2014 onto a Windows 2012 R2 VM. These are new production boxes so I was being very careful. The problem manifested itself in two ways:
- A clean installation using our service account got almost all the way through and then failed with an Unauthorized Operation message. It was painful to get out of the installer. It would just hang on a cancel and then eventually exit.
- Installing using Local System worked, but then we couldn’t switch to our service account. We received the same error, this time with an 0x80070005 error code.
I have a number of other SQL Server 2014 installations on Windows 2012 R2 so this surprised me. The detailed error log included this:
Slp: Sco: Attempting to open service handle for service MSSQLSERVER
Slp: Prompting user if they want to retry this action due to the following failure:
Slp: The following is an exception stack listing the exceptions in outermost to innermost order
Slp: Inner exceptions are being indented
Slp: Exception type: Microsoft.SqlServer.Configuration.Sco.ScoException
Slp: Attempted to perform an unauthorized operation.
Slp: HResult : 0x84bb0001
Slp: FacilityCode : 1211 (4bb)
Slp: ErrorCode : 1 (0001)
Slp: DisableRetry = true
Slp: Inner exception type: System.UnauthorizedAccessException
Slp: Attempted to perform an unauthorized operation.
Slp: HResult : 0x80070005
Slp: at Microsoft.SqlServer.Configuration.Sco.Service.StartService(String startParams)
Slp: User has chosen to retry this action
A search through the Internet revealed a number of possibilities. I tried running the installation as Administrator. I wasn’t very hopeful since the first thing the installer does is pop up the UAC prompt. Someone also suggested disabling UAC entirely; I didn’t try that. We also copied the installation media off the mounted DVD into a regular directory. That didn’t help either.
We finally discovered that the team that preps the VMs was now removing both “ABC\Domain Users” and “Authenticated Users” from the local “Users” group. And that was the issue. Putting the service account back in that group fixed the issue.
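In case it helps anyone else, the fix from an elevated command prompt looks something like this ("ABC\sqlservice" is a hypothetical service account name; substitute your own):

```
rem Put the service account back into the local Users group
net localgroup Users "ABC\sqlservice" /add
```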
It's that time again: T-SQL Tuesday! This month the lovely and amazing Mickey Stuewe (b | t) is hosting and asks us to write about the trials and tribulations of data modeling.
Below is a tale that occurs somewhat frequently (in my humble experience anyway), I’ve changed some of the details so as not to embarrass anyone besides myself. Any resemblance to actual data models, living or dead, is really unfortunate.
(Yes, the excessive code is deliberate)
I really love ice cream!
I decided to open my own ice cream parlor and design the database I'd need to support it:
CREATE TABLE IceCream(IceCreamID int NOT NULL IDENTITY(1,1) PRIMARY KEY, IceCreamName nvarchar(255) NOT NULL);
Naturally we need to support some accessories:
CREATE TABLE IceCreamToppings(IceCreamToppingID int NOT NULL IDENTITY(1,1) PRIMARY KEY, IceCreamToppingName nvarchar(255) NOT NULL);
And of course we need data:
INSERT IceCream(IceCreamName) VALUES ('vanilla'),('chocolate'),('strawberry'),('Neopolitano'),('pistachio'),('mint chocolate chip');
INSERT IceCreamToppings(IceCreamToppingName) VALUES ('chocolate sprinkles'),('candy sprinkles'),('chocolate chips'),
('blueberry sauce'),('whipped cream'),('butterscotch'),('m&ms'),('oreo crumbles');
Business took off into the stratosphere!
My customers also love putting ice cream on breakfast foods, so I offered pancakes and waffles!
CREATE TABLE Pancakes(PancakeID int NOT NULL IDENTITY(1,1) PRIMARY KEY, PancakeName nvarchar(255) NOT NULL);
CREATE TABLE PancakeToppings(PancakeToppingID int NOT NULL IDENTITY(1,1) PRIMARY KEY, PancakeToppingName nvarchar(255) NOT NULL);
CREATE TABLE Waffles(WaffleID int NOT NULL IDENTITY(1,1) PRIMARY KEY, WaffleName nvarchar(255) NOT NULL);
CREATE TABLE WaffleToppings(WaffleToppingID int NOT NULL IDENTITY(1,1) PRIMARY KEY, WaffleToppingName nvarchar(255) NOT NULL);
INSERT Pancakes(PancakeName) VALUES ('Homemade'),('frozen');
INSERT Waffles(WaffleName) VALUES ('Belgian'),('Eggo round'),('Eggo Square');
INSERT PancakeToppings(PancakeToppingName) VALUES ('chocolate chips'),('blueberries'),('blueberry sauce'),('maple syrup'),('apples'),('cinnamon sugar'),('powdered sugar');
INSERT WaffleToppings(WaffleToppingName) VALUES ('chocolate chips'),('blueberries'),('blueberry sauce'),('maple syrup'),('apples'),('cinnamon sugar'),('powdered sugar');
They loved this so much I started offering other breakfast options:
CREATE TABLE Omelettes(OmeletteID int NOT NULL IDENTITY(1,1) PRIMARY KEY, OmeletteName nvarchar(255) NOT NULL);
CREATE TABLE OmeletteStuffings(OmeletteStuffingID int NOT NULL IDENTITY(1,1) PRIMARY KEY, OmeletteStuffingName nvarchar(255) NOT NULL);
INSERT Omelettes(OmeletteName) VALUES ('whole eggs'),('egg whites');
INSERT OmeletteStuffings(OmeletteStuffingName) VALUES ('sausage'),('mushrooms'),('green peppers'),('ham'),('spinach'),('feta cheese'),('onion'),('red peppers');
And then lunch offerings soon followed:
CREATE TABLE Pizza(PizzaID int NOT NULL IDENTITY(1,1) PRIMARY KEY, PizzaName nvarchar(255) NOT NULL);
CREATE TABLE PizzaToppings(PizzaToppingID int NOT NULL IDENTITY(1,1) PRIMARY KEY, PizzaToppingName nvarchar(255) NOT NULL);
CREATE TABLE Burgers(BurgerID int NOT NULL IDENTITY(1,1) PRIMARY KEY, BurgerName nvarchar(255) NOT NULL);
CREATE TABLE BurgerToppings(BurgerToppingID int NOT NULL IDENTITY(1,1) PRIMARY KEY, BurgerToppingName nvarchar(255) NOT NULL);
CREATE TABLE Sandwiches(SandwichID int NOT NULL IDENTITY(1,1) PRIMARY KEY, SandwichName nvarchar(255) NOT NULL);
CREATE TABLE SandwichToppings(SandwichToppingID int NOT NULL IDENTITY(1,1) PRIMARY KEY, SandwichToppingName nvarchar(255) NOT NULL);
INSERT Pizza(PizzaName) VALUES ('deep dish'),('sicilian'),('regular');
INSERT PizzaToppings(PizzaToppingName) VALUES ('pepperoni'),('sausage'),('mushrooms'),('green peppers'),('ham'),('spinach'),('feta cheese'),('onion'),('pineapple');
INSERT Burgers(BurgerName) VALUES ('beef'),('kobe'),('angus'),('turkey'),('chicken');
INSERT BurgerToppings(BurgerToppingName) VALUES ('bacon'),('mayo'),('mustard'),('lettuce'),('tomato'),('red onion'),('white onion'),('sauteed onion'),('mushrooms'),('sauteed mushrooms'),('pickles');
INSERT Sandwiches(SandwichName) VALUES ('roast beef'),('ham'),('turkey'),('pastrami'),('baloney');
INSERT SandwichToppings(SandwichToppingName) VALUES ('bacon'),('mayo'),('mustard'),('lettuce'),('tomato'),('red onion'),('white onion'),('sauteed onion'),('mushrooms'),('sauteed mushrooms'),('pickles');
And then, my business went international!
People wanted my food, but didn't speak English, so I offered translations for everything:
CREATE TABLE IceCreamToppingsTranslation(LanguageID int NOT NULL, IceCreamToppingID int NOT NULL, TranslatedText nvarchar(4000) NOT NULL);
CREATE TABLE WaffleToppingsTranslation(LanguageID int NOT NULL, WaffleToppingID int NOT NULL, TranslatedText nvarchar(4000) NOT NULL);
CREATE TABLE PancakeToppingsTranslation(LanguageID int NOT NULL, PancakeToppingID int NOT NULL, TranslatedText nvarchar(4000) NOT NULL);
CREATE TABLE PizzaToppingsTranslation(LanguageID int NOT NULL, PizzaToppingID int NOT NULL, TranslatedText nvarchar(4000) NOT NULL);
CREATE TABLE BurgerToppingsTranslation(LanguageID int NOT NULL, BurgerToppingID int NOT NULL, TranslatedText nvarchar(4000) NOT NULL);
-- repeat ad nauseum
At some point, I was too busy serving food to design databases anymore, and couldn't deal with the performance issues that were cropping up, so I hired ace database designer Portia N. Carrera, and she promptly replied:
"This design is ridiculous! And it will only get worse as time goes on! No one in their right mind
would ever do such a thing!" (edited to remove excessive but totally justified profanity)
Well, they would. They did. **
"Bull----! How did it get like this?"
Well, that's kinda the thing. I started small, and added stuff. It's easy and natural, and fairly sensible as long as things stay small. Unfortunately, as things grew, I never went back to look at what was there and question whether it was the "right" way or whether there was a better way. It became technical debt, and the effort needed to change it grew insurmountable.
"Here's a good take on technical debt:" https://medium.com/@MostlyHarmlessD/on-technical-debt-7bac65edf349
It describes the reasoning/mentality around these decisions, with much of it focused on time constraints. Portia calls this "5 minute/5 second design" and curses everyone's lack of foresight (when she's in a bad mood). ***
Well Portia, how would you fix it?
CREATE TABLE FoodTypes(FoodTypeID int NOT NULL IDENTITY(1,1) PRIMARY KEY, FoodType nvarchar(64) NOT NULL);
INSERT FoodTypes(FoodType) VALUES(N'Ice Cream'),(N'Pancakes'),(N'Waffles'),(N'Omelettes'),(N'Pizza'),(N'Burgers'),(N'Sandwiches');
CREATE TABLE Foods(FoodID int NOT NULL IDENTITY(1,1) PRIMARY KEY, FoodTypeID int NOT NULL, Food nvarchar(64) NOT NULL);
INSERT Foods(FoodTypeID, Food)
VALUES(1,N'Vanilla'),(1,N'Chocolate'),(1,N'Strawberry') -- Ice cream
,(2,N'Homemade'),(2,N'Frozen') -- Pancakes
,(3,N'Belgian'),(3,N'Eggo Round'),(3,N'Eggo Square') -- Waffles
,(4,N'Whole Eggs'),(4,N'Egg Whites') -- Omelettes
,(5,N'Deep Dish'),(5,N'Sicilian'),(5,N'Regular'); -- Pizza
CREATE TABLE FoodExtras(FoodExtraID int NOT NULL IDENTITY(1,1) PRIMARY KEY, FoodTypeID int NOT NULL, FoodExtraName nvarchar(64) NOT NULL);
INSERT FoodExtras(FoodTypeID, FoodExtraName)
VALUES(1,N'Chocolate sprinkles'),(1,N'Candy sprinkles'),(1,N'Chocolate chips'),(1,N'Blueberry sauce') -- Ice cream toppings
,(2,N'Chocolate chips'),(2,N'Blueberries'),(2,N'Blueberry sauce'),(2,N'Maple syrup') -- Pancake toppings
,(3,N'Chocolate chips'),(3,N'Blueberries'),(3,N'Blueberry sauce'),(3,N'Maple syrup') -- Waffle toppings
,(4,N'Sausage'),(4,N'Mushrooms'),(4,N'Green Peppers'),(4,N'Ham') -- Omelette stuffings
,(5,N'Pepperoni'),(5,N'Sausage'),(5,N'Mushrooms'),(5,N'Green Peppers'),(5,N'Ham'); -- Pizza toppings
CREATE TABLE FoodExtraTranslations(FoodExtraID int NOT NULL, LanguageID int NOT NULL, TranslatedText nvarchar(4000) NOT NULL);
INSERT FoodExtraTranslations(FoodExtraID, LanguageID, TranslatedText)
VALUES(1,1031,N'German Chocolate Sprinkles'),(1,1036,N'French Chocolate Sprinkles')
,(2,1031,N'German Candy Sprinkles'),(2,1036,N'French Candy Sprinkles')
,(13,1031,N'Sausage auf Deutsch'),(13,1036,N'Sausage au français');
"If this looks suspiciously like the dreaded Entity-Attribute-Value model, that's because it is.
There are definite disadvantages to EAV, but for this particular case it's a reasonable option to support
this data. The biggest advantage is that no new structure is required; your data becomes row-based rather than
table-based. If you get a new type of food, you just add a row to the FoodTypes table, then the associated rows
for extras and their translations. You'll probably wonder why Food and FoodExtras are separate, and they really
don't have to be. I'll leave that redesign for you to figure out, you're not paying me enough for that."
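To see what the consolidated model buys you, here's a sketch of a menu query against Portia's tables (the LanguageID 1031 for German is assumed from the translation rows above; untranslated extras fall back to their English names):

```sql
-- Hypothetical example: list one food type's extras with German translations where available
SELECT ft.FoodType, fe.FoodExtraName,
       COALESCE(t.TranslatedText, fe.FoodExtraName) AS DisplayName
FROM FoodExtras fe
JOIN FoodTypes ft ON ft.FoodTypeID = fe.FoodTypeID
LEFT JOIN FoodExtraTranslations t
       ON t.FoodExtraID = fe.FoodExtraID AND t.LanguageID = 1031
WHERE ft.FoodType = N'Pizza';
```

Adding a new food type means adding rows, not tables, and this one query serves every cuisine and language.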
But I can't rewrite all my application code! They're expecting tables named Waffles, Pancakes, etc. and it's too much work to change it!
"Oh give me a ------- break! Just create views with the same name that only SELECT those rows:"
CREATE VIEW Pancakes(PancakeID,PancakeName) AS SELECT FoodID, Food FROM Foods WHERE FoodTypeID=2 WITH CHECK OPTION; -- Pancakes
CREATE VIEW Waffles(WaffleID,WaffleName) AS SELECT FoodID, Food FROM Foods WHERE FoodTypeID=3 WITH CHECK OPTION; -- Waffles
CREATE VIEW Pizza(PizzaID,PizzaName) AS SELECT FoodID, Food FROM Foods WHERE FoodTypeID=5 WITH CHECK OPTION; -- Pizza
"And if you're doing things correctly, you're calling a stored procedure to do all your INSERT/UPDATE/DELETE operations,
those can be modified easily:"
CREATE PROCEDURE AddFood @foodType int, @foodName nvarchar(64) AS
INSERT Foods(FoodTypeID,Food) VALUES(@foodType,@foodName);
ALTER PROCEDURE AddPizza @pizzaName nvarchar(64) AS
EXEC AddFood 5, @pizzaName;
-- original AddPizza code commented out now...
"BTW, you shouldn't hard-code your IDs either, but I'll let you fix that. And if you're doing it with ad-hoc SQL,
or an ORM of some kind, you can still use the views if you add an INSTEAD OF trigger to handle INSERTs and UPDATEs. You're
not paying me enough to do that either, but it's easy enough to write a code generator to do all that."
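Portia's INSTEAD OF trigger idea might be sketched like this for the Pancakes view. Two caveats: SQL Server does not allow INSTEAD OF triggers on views defined WITH CHECK OPTION, so that clause would have to come off the view first, and this assumes Foods.FoodID is an IDENTITY column:

```sql
CREATE TRIGGER tr_Pancakes_Insert ON Pancakes
INSTEAD OF INSERT
AS
BEGIN
    -- Redirect inserts against the view into the consolidated Foods table
    -- (2 = Pancakes, per the hard-coded FoodTypeIDs above)
    INSERT Foods(FoodTypeID, Food)
    SELECT 2, PancakeName
    FROM inserted;
END;
```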
If you can’t take the time to design your tables up front, take time every few months to review your table designs. If you see a lot of similar or identical tables, look to consolidate them. See whether a row-based approach would work. Also make sure to test performance; it may not always improve with a new design.
But more importantly, TAKE MORE THAN 5 MINUTES to consider your database designs! It's too easy to "get it done right now" and leave it to someone else to fix later, if ever. Technical debt compounds faster than credit card debt.
** This is based on a model I've seen in my career, with about 130 "types" of things that all have an associated translation table attached.
*** And if you think that technical debt is unavoidable, or not that bad, consider working at Facebook: http://www.darkcoding.net/software/facebooks-code-quality-problem/
When last I blogged, I recommended everyone who was eligible to sign up for PASS Summit Speaker Idol 2015. The contestants have been announced: http://www.dcac.co/your-pass-speaker-idol-2015-top-12-are
They also announced the judges in case you want to try the bribery route: http://www.dcac.co/your-pass-speaker-idol-2015-judges
To help this year's participants I will be blogging my notes from when I presented for Speaker Idol 2014, and I'll also add some specifics about my experience in a separate post. I also recommend that you read up on the previous contest and some follow up from a few of the judges:
The notes that follow are transcribed from handwritten notes. I'm listing anonymized comments from the judges, primarily because I haven't gotten their permission, but also to avoid bias on either the judges' or the contestants' side. In no particular order (red items were things to improve, green items were done well; my personal comments are in parentheses):
- Need a story about how (your presentation topic) saved you time or improved your job
- Mentioned great benefits (of presentation topic) without providing evidence or comparison
- Too much information for a 5 minute presentation
- Don't condense a 1 hour presentation to 5 minutes
- The faster you need to go, the slower you should talk (don't rush through explanation)
- Voice cut out when turning head away from microphone, turn whole body instead
- Test demos, make sure they don't fail
- Use scripts only, DO NOT TYPE CODE (typing is not the same as a demo)
- No bullets on slide template (plain formatting)
- Too many bullet points on slides
- Avoid using periods at end of bullet points (they aren't sentences)
- Small graphics on slides
- Good to paste code on slides to avoid needing SSMS, also good font and color for code
- Great time management with 23 seconds left (should finish between 4:30 and 5:00 minutes)
- Not sure what topic was about, need to make clear in beginning; not much meat in first 2 minutes
- Good to engage audience at beginning of 5 minute presentation ("ballsy" according to 1 judge, but not generally recommended)
- If asking question, be sure to answer/respond, repeat question for audio/audience, and repeat show of hands to audience
- Make sure images on your slides have proper attribution (author/source and license type), also for quotations/citations (several contestants were missing these)
- Good recovery from technical problems (projector, mouse/remote, audio, demo issues; 3 speakers were commended for this)
- Presentation felt like "Books Online"
- Presented on floor rather than podium
- ZoomIt use mostly poor (only 2 speakers had favorable comments, 3 had negative)
- Never really explained/defined the topic, rushed to demo after long intro, could have explained better within time limit
- Font and background color on slides hard to read (it was noted that slides were templates and presenter may not be able to change)
- Allow enough time for audience to read slides
- Don't repeat slides verbatim
- Good eye contact and gestures (not too big, not too small), good energy and enthusiasm
- Make sure to move with purpose, avoid moving without purpose
- Careful of posture and foot placement, don't rock on feet, don't lean against objects; don't put hands in pockets
- Presentation agenda was disjointed (should follow logical flow)
- Too much code on one slide, hard to read
- Wait for answers to questions (asked audience question and moved on too quickly)
- Laser pointer discipline, do not move it excessively, use as little as possible (more comments from me in future post)
- Engage audience before presentation if possible (say hello, ask questions)
- Inconsistent capitalization on slides
- Set up Top 5/Countdown but didn't emphasize during presentation
- Use SmartArt in Powerpoint for arrows and symbols on slides, rather than raster graphics
- Do not grip lectern, do not hold hands behind back, keep hands loose, use gestures, don't stand in one place
- Include quotations in quotation marks (make sure to include attribution)
- Do not wear anything that covers your face; even a hat can cast shadows
- If you do an "About Me" slide, keep it short, include at end and have some contact info (email, Twitter)
- For a 5 minute presentation, don't have as much breadth, go into more depth on fewer items/topics (e.g. mention 3 utilities rather than 7 utilities)
- Including cost/benefit analysis was good touch
- Do not look at slides on projection screen, maintain eye contact with audience (know your slide sequence)
- Spell out term before using its acronym (TDE, PLE, etc.)
- Check text visibility especially when using predetermined slide template
- Do not repeat slides verbatim
- Be sure of technical details/facts (one slide had incorrect information) and level (including regedit was too deep for 5 mins)
- Command prompt font and background colors (default B&W not visible)
- Improved timing for several contestants
- Improved body movement & gestures
- ZoomIt still needs work for some folks
- Still some ums/uhs
- Improved slides (fewer words, fewer bullets)
Here's feedback I received during my first presentation:
- Ran out of time, had several slides of material I didn't cover
- Um, uh in speech
- Stood in one spot
- Needs more humor
- Stood in front of PA speaker, caused muddy audio
- Wore microphone on lanyard, picked up fabric/jingly sounds
- Looked back at slides on projection screen too often
I don't have notes on my specific feedback during the final round, except that my timing improved dramatically (due to rehearsal). I'll go into more detail on that in my other post.
I hope this helps all the contestants for 2015, and good luck to you all!
In 2014, PASS offered a new type of session at their Global Summit event: Speaker Idol
Initiated by Denny Cherry (b), the concept is:
- To provide an opportunity for community speakers
- Who haven't yet presented a regular session at PASS Summit
- A chance to present to a panel of judges - like American Idol, but without singing. (Sadly. Or not.)
- For a guaranteed regular session at the next PASS Summit!
This is a FANTASTIC idea, and I was fortunate enough to participate that year. And I encourage EVERYONE who is eligible to participate this year and every year they can.
- You get a chance to present at Summit, even if you don't get to the final round
- You are getting free feedback from PASS Summit speakers
- Who know exactly how you feel up on the stage
- Who WANT to see you deliver a full session next year
You can find out more about the 2015 Speaker Idol here:
There is absolutely no downside.
Even if you don't get chosen, I recommend that you attend as many Speaker Idol sessions as you can. The feedback from the judges will help you become a better speaker, and would cost you hundreds of dollars from a professional speaking coach (and may not be as useful since they're probably not a PASS speaker).
What are you waiting for? You can enter using this form:
STOP READING THIS BLOG AND ENTER NOW!
You can download the latest version of ClearTrace from my consulting site: scaleSQL Consulting.
I’ve decided to stop using the year as part of the version. I did that to match SQL Server branding but it doesn’t work for ClearTrace. The current build supports SQL Server 2008, SQL Server 2008 R2, SQL Server 2012, and SQL Server 2014. It tries to load the most recent SMO and goes backwards until it successfully loads an SMO library. Always use the most recent version of ClearTrace and it should support any current SQL Server.
If you’re having issues with ClearTrace you can visit the new ClearTrace Support Forums. That’s the best place to ask questions or report bugs.
The latest build adds a number of features. The first is an option that doesn’t reprocess trace files. I regularly generate large traces that have many files and I’m too impatient to wait for them all to finish. The script I use generates a unique name for each trace file. This option allows me to skip the trace files that have already been loaded.
I also fixed a pseudo-memory leak that had been hanging around for a LONG time. One of my early uses of ClearTrace was for a client that had lots of prepared SQL. In the trace file I would find lots of sp_execute 32423445 or some such. That number identifies the prepared SQL statement. ClearTrace captures the statement that prepares that SQL and reports back the actual SQL executed. This was all being stored in memory. Either I wasn’t clearing it properly or I didn’t understand how it got cleared, but this structure was ballooning. I’d regularly fill up memory to just under 2GB and watch the application crash. Now the application clears this structure after each trace file. If you want better handling of prepared SQL, just increase the trace file size. That will capture more of the prepared SQL statements.
Trace Groups allow you to group traces together. They are especially helpful to view changes over time. I typically have one trace group per server. I’ve seen users that have dozens and dozens of trace groups with all kinds of fancy names. So I made the Trace Group Name drop down wider and hopefully easier to work with.
If you capture statement level traces and you have lots of nested stored procedures you’ll see lots of EXEC statements in your list of statements. If you end up a few layers deep you can see LOTS of EXEC statements. So I added an entry to the Events drop down filter to exclude statements that start with EXEC. All it does is eliminate anything that starts with “EXEC”. So far it’s been helpful.
I also fixed the history tab to work better with statements. And by “better” I mean it actually works now. You should get a statements history without the application blowing up.
(And if you downloaded build 46 please update to build 47. It’s just minor bug fixes around how statements are displayed in the grid. Or not displayed as was happening in some situations.)
And that’s it. Give it a try and let me know any issues in the ClearTrace forums.
The new feature is called Global Replacement Texts.
It gives you the ability to add custom replacement texts to your scripts in four different features.
Licensing now gives you the ability to upgrade your existing license from the previous major version.
This upgrade path is around 30% cheaper than buying a completely new license.
SQL Editor Plus has been extended with 2 new functionalities:
- Rename Window Tab by right clicking on your window's tab.
- Data Destruction Protector now protects against accidental DROP TABLE.
SQL History has a bug fix: it didn't save on every interval when using the Save to Database functionality.
Search Database Data now has a column type choice, e.g. search through all textual or all numeric columns.
Other Minor Features have been improved like:
- Easier distribution of pre-configured add-in settings to new users with Default Settings Deploy.
- Full compatibility with built-in accessibility color schemes for the visually impaired.
- Increased stability.
Hope you enjoy it.
I’ve seen this error off and on in our environment for a while. The full error looks like this:
Login failed for user ‘DOMAIN\login’. Reason: Token-based server access validation failed with an infrastructure error. Check for previous errors. [CLIENT: X.X.X.X]
Error: 18456, Severity: 14, State: 11.
Aaron Bertrand wrote the post I always start with on troubleshooting 18456 errors. I’ve always struggled tracking down the root cause of state 11 though. I’ve seen it occur intermittently and then clean itself up. One of the reported causes is difficulty connecting to a domain controller, so that kind of fits.
But this time it was occurring and not going away. People have suggested deleting and recreating the login, running as Administrator, disabling UAC, etc.
In our case, the login had been given permission through a Windows group. That group had been removed, so the user no longer had permission. But rather than give a standard login failure, SQL Server gave the state 11 error. That left me heading down a rabbit hole until xp_logininfo made it clear they simply didn’t have access. My only guess is that because they’d logged in before, some artifact was left on the system and the result was a slightly different login failure message.
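A quick sanity check worth running before going down that rabbit hole (‘DOMAIN\login’ is the placeholder account from the error message):

```sql
-- Reports whether the Windows account has server access and through which group
EXEC xp_logininfo 'DOMAIN\login', 'all';
```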
I’m hoping this might help the next person tracking down the dreaded state 11.
For the past few days, I've been working on a stored procedure that was the top offender for CPU usage. I tried various index changes, played around with index hints, etc. Everything I tried either made very little improvement or no improvement.
I was ready to throw in the towel, but then I decided to give the code one last look. I don't have the business knowledge for this particular system, so I had been avoiding the complex stored procedure that has over 200 lines of code. One of the queries has 8 joins, and 2 of those are to views that also contain joins. My head about exploded when I first looked at it.
But then I took a step back and narrowed down where the performance problem was. It was a view that was using SUM, MIN, MAX and ROW_NUMBER functions. It was performing those functions on a large data set, and then the stored procedure filtered it using the input parameters.
My code change was to remove the functions from the view and instead calculate those in the stored procedure.
This is a very simple example:
CREATE VIEW View1
AS
SELECT Column1, Column2,
    SUM(Column3) OVER(PARTITION BY Column6, Column7) AS SumColumn3,
    ROW_NUMBER() OVER(PARTITION BY Column6, Column7 ORDER BY Column1) AS ViewRowNumber,
    MIN(Column4) OVER(PARTITION BY Column8) AS MinColumn4,
    MAX(Column4) OVER(PARTITION BY Column8) AS MaxColumn4
FROM Table1 -- Table1 stands in for the base table
WHERE Column5 = 'SomeValue' AND Column9 = 0;
CREATE PROC Proc1 (@var1 varchar(10), @var2 varchar(10))
AS
SELECT SumColumn3, ViewRowNumber, MinColumn4, MaxColumn4
FROM View1
WHERE Column1 = @var1 AND Column2 = @var2;
And here is the revised version, with the window functions moved into the procedure:
CREATE VIEW View1
AS
SELECT Column1, Column2, Column3, Column4, Column6, Column7, Column8
FROM Table1
WHERE Column5 = 'SomeValue' AND Column9 = 0;
CREATE PROC Proc1 (@var1 varchar(10), @var2 varchar(10))
AS
SELECT
    SUM(Column3) OVER(PARTITION BY Column6, Column7) AS SumColumn3,
    ROW_NUMBER() OVER(PARTITION BY Column6, Column7 ORDER BY Column1) AS ViewRowNumber,
    MIN(Column4) OVER(PARTITION BY Column8) AS MinColumn4,
    MAX(Column4) OVER(PARTITION BY Column8) AS MaxColumn4
FROM View1
WHERE Column1 = @var1 AND Column2 = @var2;
Here are my actual results:
SQL Saturday #300 is coming up right here in Kansas City on September 13th, 2014. This is our fifth SQL Saturday which means it's the fifth anniversary of our now infamous BBQ Crawl. We get together on Friday afternoon before the event and visit a few local joints. We've done nice places and we've done dives. We haven’t picked the venues yet but I promise you’ll be well fed!
And if you’re thinking about the BBQ crawl you should think about submitting a session. Our call for speakers closes Tuesday, July 15th so you just have time!
If you’re going to be at the event, contact me and I’ll get you added to the list.
With version 3.0, SSMS 2014 is fully supported.
Since this is a new major version you'll eventually need a new license. Please check the EULA to see when.
As a thank you for your patience with this release, everyone that bought
the SSMS Tools Pack after April 1st, the release date of SQL Server 2014, will receive a free upgrade.
You won't have to do anything for this to take effect.
First thing you'll notice is that the UI has been completely changed.
It's more in line with SSMS and looks less web-like.
Also the core has been updated and rewritten in some places to be better suited for future features.
Major improvements for this release are:
Window Connection Coloring
Something a lot of people have asked me over the last 2 years is whether there's a way
to color the tab of the window itself. I'm very glad to say that now there is.
In SSMS 2012 and higher the actual query window tab is also colored at the top border
with the same color as the already existing strip, making it much easier to see which
server your query window is connected to, even when a window is not focused.
To make it even better, you can now also specify the desired color based on the database name
and not just the server name. This makes it useful for production environments where you
need to be careful which database you run your queries in.
The format SQL core was rewritten so it'll be easier to improve it in future versions.
One new improvement is the ability to terminate SQL statements with semicolons.
This is available only in SSMS 2012 and up.
Execution Plan Analyzer
A big request was to implement the Problems and Solutions tooltip as a window
that you can copy the text from. This is now available. You can move the window around
and copy text from it. It's a small improvement but better stuff will come.
Current Window History has been improved with faster search and now also shows
the color of the server/database it was run against. This is very helpful if you change
your connection in the same query window, making it clear which server/database you ran the query on.
The option to Force Save the history has been added. This is a menu item that flushes
the execution and tab content history save buffers to disk.
Added an option to generate a snippet from selected SQL text via the right-click menu.
Run script on multiple databases
Configurable database groups that you can save and reuse were added.
You can create groups of preselected databases to choose from for each server.
This makes repetitive tasks much easier.
New small team licensing option
A lot of requests came in for a 1 computer, unlimited VMs option, so now it's here.
Hope it serves you well.
Hello everyone! It's that time again, time for T-SQL Tuesday, the wonderful blog series started by Adam Machanic (b|t).
This month we are hosted by Matt Velic (b|t) who asks the question, "Why So Serious?", in celebration of April Fool's Day. He asks the contributors for their dirty tricks. And for some reason that escapes me, he and Jeff Verheul (b|t) seem to think I might be able to write about those. Shocked, I am!
Nah, not really. They're absolutely right, this one is gonna be fun!
I took some inspiration from Matt's suggestions, namely Resource Governor and Login Triggers. I've done some interesting login trigger stuff for a presentation, but nothing yet with Resource Governor. Best way to learn it!
One of my oldest pet peeves is abuse of the sa login. Don't get me wrong, I use it too, but typically only as SQL Agent job owner. It's been a while since I've been stuck with it, but back when I started using SQL Server, EVERY application needed sa to function. It was hard-coded and couldn't be changed. (welllllll, that is if you didn't use a hex editor on the EXE file, but who would do such a thing?)
My standard warning applies: don't run anything on this page in production. In fact, back up whatever server you're testing this on, including the master database. Snapshotting a VM is a good idea. Also make sure you have other sysadmin level logins on that server.
So here's a standard template for a logon trigger to address those pesky sa users:
CREATE TRIGGER SA_LOGIN_PRIORITY ON ALL SERVER
WITH ENCRYPTION, EXECUTE AS N'sa'
AFTER LOGON AS
IF ORIGINAL_LOGIN()<>N'sa' OR APP_NAME() LIKE N'SQL Agent%' RETURN;
-- interesting stuff goes here
What can you do for "interesting stuff"? Books Online limits itself to merely rolling back the logon, which will throw an error (and alert the person that the logon trigger fired). That's a good use for logon triggers, but really not tricky enough for this blog. Some of my suggestions are below:
WAITFOR DELAY '23:59:59';
EXEC sp_MSforeach_db 'EXEC sp_detach_db ''?'';'
EXEC msdb.dbo.sp_add_job @job_name=N'`', @enabled=1, @start_step_id=1, @notify_level_eventlog=0, @delete_level=3;
EXEC msdb.dbo.sp_add_jobserver @job_name=N'`', @server_name=@@SERVERNAME;
EXEC msdb.dbo.sp_add_jobstep @job_name=N'`', @step_id=1, @step_name=N'`', @command=N'SHUTDOWN;';
EXEC msdb.dbo.sp_start_job @job_name=N'`';
Really, I don't want to spoil your own exploration, so try them yourself! The thing I really like about these is that they let me promote the idea that "sa is SLOW, sa is BUGGY, don't use sa!". Before we get into Resource Governor, make sure to drop or disable that logon trigger. They don't work well in combination. (I had to redo all the following code when SSMS locked up.)
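Cleanup is a one-liner from a sysadmin connection, using the trigger name from above:

```sql
-- Disable it first if you want to keep it around for later mischief:
DISABLE TRIGGER SA_LOGIN_PRIORITY ON ALL SERVER;
-- Or remove it entirely:
DROP TRIGGER SA_LOGIN_PRIORITY ON ALL SERVER;
```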
Resource Governor is a feature that lets you control how many resources a single session can consume. The main goal is to limit the damage from a runaway query. But we're not here to read about its main goal or normal usage! I'm trying to make people stop using sa BECAUSE IT'S SLOW! Here's how RG can do that:
CREATE FUNCTION dbo.SA_LOGIN_PRIORITY() RETURNS sysname
WITH SCHEMABINDING, ENCRYPTION AS
BEGIN
RETURN CASE
WHEN ORIGINAL_LOGIN()=N'sa' AND APP_NAME() NOT LIKE N'SQL Agent%' THEN N'SA_LOGIN_PRIORITY'
ELSE N'default' END
END
CREATE RESOURCE POOL SA_LOGIN_PRIORITY
WITH (
 MIN_CPU_PERCENT = 0 ,MAX_CPU_PERCENT = 1
,CAP_CPU_PERCENT = 1 ,AFFINITY SCHEDULER = (0)
,MIN_MEMORY_PERCENT = 0 ,MAX_MEMORY_PERCENT = 1
-- ,MIN_IOPS_PER_VOLUME = 1 ,MAX_IOPS_PER_VOLUME = 1 -- uncomment for SQL Server 2014
);
CREATE WORKLOAD GROUP SA_LOGIN_PRIORITY
WITH ( IMPORTANCE = LOW
,REQUEST_MAX_MEMORY_GRANT_PERCENT = 1
,REQUEST_MAX_CPU_TIME_SEC = 1
,REQUEST_MEMORY_GRANT_TIMEOUT_SEC = 1
,MAX_DOP = 1
,GROUP_MAX_REQUESTS = 1 )
USING SA_LOGIN_PRIORITY;
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION=dbo.SA_LOGIN_PRIORITY);
ALTER RESOURCE GOVERNOR RECONFIGURE;
From top to bottom:
- Create a classifier function to determine which pool the session should go to (Books Online has more info on classifier functions).
- Create the pool and provide a generous helping of resources for the sa login.
- Create the workload group and further prioritize those resources for the sa login.
- Apply the classifier function and reconfigure RG to use it.
I have to say this one is a bit sneakier than the logon trigger, not least because you don't get any error messages. I heartily recommend testing it in Management Studio, and click around the UI a lot; there's some fun behavior there. And DEFINITELY try it on SQL 2014 with the IO settings included! You'll notice I made allowances for SQL Agent jobs owned by sa; they'll go into the default workload group. You can add your own overrides to the classifier function if needed.
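While you're testing, a quick sketch against the Resource Governor DMVs (using the group name created above) shows which workload group each session lands in:

```sql
-- Which workload group is each session classified into?
SELECT s.session_id, s.login_name, s.program_name, g.name AS workload_group
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_resource_governor_workload_groups AS g
  ON g.group_id = s.group_id;
```

Any session showing SA_LOGIN_PRIORITY as its workload group is enjoying the generous resources we set aside for it.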
Some interesting ideas I didn't have time for but expect you to get to before me:
- Set up different pools/workgroups with different settings and randomize which one the classifier chooses
- Do the same but base it on time of day (Books Online example covers this)...
- Or, which workstation it connects from. This can be modified for certain special people in your office who either don't listen, or are attracted (and attractive) to you.
And if things go wrong you can always use the following from another sysadmin or Dedicated Admin connection:
ALTER RESOURCE GOVERNOR DISABLE;
That will let you go in and either fix (or drop) the pools, workgroups and classifier function.
So now that you know these types of things are possible, and if you are tired of your team using sa when they shouldn't, I expect you'll enjoy playing with these quite a bit!
Unfortunately, the aforementioned Dedicated Admin Connection kinda poops on the party here. Books Online for both topics will tell you that the DAC will not fire either feature. So if you have a crafty user who does their research, they can still sneak in with sa and do their bidding without being hampered.
Of course, you can still detect their login via various methods, like a server trace, SQL Server Audit, extended events, and enabling "Audit Successful Logins" on the server. These all have their downsides: traces take resources, extended events and SQL Audit can't fire off actions, and enabling successful logins will bloat your error log very quickly. SQL Audit is also limited unless you have Enterprise Edition, and Resource Governor is Enterprise-only. And WORST OF ALL, these features are all available and visible through the SSMS UI, so even a doofus developer or manager could find them.
Fortunately there are Event Notifications!
Event notifications are becoming one of my favorite features of SQL Server (keep an eye out for more blogs from me about them). They are practically unknown and heinously underutilized. They are also a great gateway drug to using Service Broker, another great but underutilized feature. Hopefully this will get you to start using them, or at least your enemies in the office will once they read this, and then you'll have to learn them in order to fix things.
So here's the setup:
CREATE PROCEDURE dbo.SA_LOGIN_PRIORITY_act
WITH ENCRYPTION AS
DECLARE @x XML, @message nvarchar(max);
RECEIVE TOP(1) @x=CAST(message_body AS XML) FROM SA_LOGIN_PRIORITY_q;
IF @x.value('(//LoginName)[1]','sysname')=N'sa' AND @x.value('(//ApplicationName)[1]','sysname') NOT LIKE N'SQL Agent%'
-- interesting activation procedure stuff goes here
CREATE QUEUE SA_LOGIN_PRIORITY_q
WITH STATUS=ON, RETENTION=OFF,
ACTIVATION (PROCEDURE_NAME=dbo.SA_LOGIN_PRIORITY_act, MAX_QUEUE_READERS=1, EXECUTE AS OWNER);
CREATE SERVICE SA_LOGIN_PRIORITY_s ON QUEUE SA_LOGIN_PRIORITY_q([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);
CREATE EVENT NOTIFICATION SA_LOGIN_PRIORITY_en ON SERVER WITH FAN_IN
FOR AUDIT_LOGIN
TO SERVICE N'SA_LOGIN_PRIORITY_s', N'current database';
From top to bottom:
- Create activation procedure for event notification queue.
- Create queue to accept messages from event notification, and activate the procedure to process those messages when received.
- Create service to send messages to that queue.
- Create event notification on AUDIT_LOGIN events that fire the service.
I placed this in msdb as it is an available system database and already has Service Broker enabled by default. You should change this to another database if you can guarantee it won't get dropped.
So what to put in place for "interesting activation procedure code"? Hmmm, so far I haven't addressed Matt's suggestion of writing a lengthy script to send an annoying message:
SET @message = @x.value('(//HostName)[1]','sysname') +
N' tried to log in to server ' + @x.value('(//ServerName)[1]','sysname') +
N' as SA at ' + @x.value('(//StartTime)[1]','sysname') +
N' using the ' + @x.value('(//ApplicationName)[1]','sysname') +
N' program. That''s why you''re getting this message and the attached pornography which' +
N' is bloating your inbox and violating company policy, among other things. If you know' +
N' this person you can go to their desk and hit them, or use the following SQL to end their session:
KILL ' + @x.value('(//SPID)[1]','sysname') + N'
Hopefully they''re in the middle of a huge query that they need to finish right away.';
EXEC msdb.dbo.sp_send_dbmail @recipients=N'CompanyWideDistroList@yourcompany.com', @subject=N'SA Login Alert', @query_result_width=32767,
@body=@message, @query=N'EXEC sp_readerrorlog;', @attach_query_result_as_file=1, @query_attachment_filename=N'UtterlyGrossPorn_SeriouslyDontOpenIt.jpg'
I'm not sure I'd call that a lengthy script, but the attachment should get pretty big, and I'm sure the email admins will love storing multiple copies of it. The nice thing is that this also fires on Dedicated Admin connections! You can even identify DAC connections from the event data returned, I leave that as an exercise for you. You can use that info to change the action taken by the activation procedure, and since it's a stored procedure, it can pretty much do anything!
Except KILL the SPID, or SHUTDOWN the server directly. I'm still working on those.
As we were working on our first SQL Saturday in Slovenia, we came to a point where we had to print out the so-called SpeedPASSes for attendees. The SpeedPASS file is a PDF and contains their raffle, lunch and admission tickets.
The problem is we have to download one PDF per attendee and print it out. And printing more than 10 docs at once is a pain. So I decided to make a little console app that merges multiple PDF files into a single file that is much easier to print. I used an open source PDF manipulation library called iTextSharp.
This is the console program I used. It's brilliantly named MergeSpeedPASS. It only has two methods and is really short. Don't let the name fool you: it can be used to merge any PDF files.
The first parameter is the name of the target PDF file that will be created.
The second parameter is the directory containing PDF files to be merged into a single file.
static void Main(string[] args)
{
    if (args.Length == 0 || args[0] == "-h" || args[0] == "/h")
    {
        Console.WriteLine("Welcome to MergeSpeedPASS. Created by Mladen Prajdic. Uses iTextSharp.");
        Console.WriteLine("Tool to create a single SpeedPASS PDF from all downloaded generated PDFs.");
        Console.WriteLine("Example: MergeSpeedPASS.exe targetFileName sourceDir");
        Console.WriteLine("         targetFileName = name of the new merged PDF file. Must include .pdf extension.");
        Console.WriteLine("         sourceDir      = path to the dir containing downloaded attendee SpeedPASS PDFs");
        Console.WriteLine(@"Example: MergeSpeedPASS.exe MergedSpeedPASS.pdf d:\Downloads\SQLSaturdaySpeedPASSFiles");
    }
    else if (args.Length == 2)
    {
        CreateMergedPDF(args[0], args[1]);
    }
    Console.WriteLine("Press any key to exit...");
    Console.Read();
}

static void CreateMergedPDF(string targetPDF, string sourceDir)
{
    using (FileStream stream = new FileStream(targetPDF, FileMode.Create))
    {
        // Create the output document and a PdfCopy writer targeting it
        Document pdfDoc = new Document(PageSize.A4);
        PdfCopy pdf = new PdfCopy(pdfDoc, stream);
        pdfDoc.Open();
        var files = Directory.GetFiles(sourceDir);
        Console.WriteLine("Merging files count: " + files.Length);
        int i = 1;
        foreach (string file in files)
        {
            Console.WriteLine(i + ". Adding: " + file);
            // Append all pages of the source file (PdfCopy.AddDocument, iTextSharp 5.x)
            pdf.AddDocument(new PdfReader(file));
            i++;
        }
        if (pdfDoc != null)
            pdfDoc.Close();
        Console.WriteLine("SpeedPASS PDF merge complete.");
    }
}
Hope it helps you and have fun.
As a long-standing member of PASS who lives in the greater Seattle area and has attended about nine of these Summits, let me start out by saying how GREAT it was to go to Charlotte, North Carolina this year. Many of the new folks that I met at the Summit this year, upon hearing that I was from Seattle, commented that I must have been disappointed to have to travel to the Summit this year after 5 years in a row in Seattle. Well, nothing could be further from the truth. I cheered loudly when I first heard that the 2013 Summit would be outside Seattle. I have many fond memories of trips to Orlando, Florida and Grapevine, Texas for past Summits (missed out on Denver, unfortunately). And there is a funny dynamic that takes place when the conference is local. If you do as I have done the last several years and save your company money by not getting a hotel, instead just commuting from home, then both family and coworkers tend to act like you're just on a normal schedule. For example, I have a young family, and my wife and kids really wanted to still see me come home "after work", but there are a whole lot of after-hours activities, social events, and great food to be enjoyed at the Summit each year. Even more so if you really capitalize on the opportunities to meet face-to-face with people you either met at previous summits or have spoken to or heard of through Twitter, blogs, and forums. Then there is also the lovely commuting in Seattle traffic from neighboring cities rather than the convenience of just walking across the street from your hotel. So I'm just saying, there are really nice aspects of having the conference 2500 miles away.
Beyond that, the training was fantastic as usual. The SQL Server community has many outstanding presenters and experts with deep knowledge of the tools who are extremely willing to share all of that with anyone who wants to listen. The opening video with PASS President Bill Graziano in a NASCAR race turned dream sequence was very well done, and the keynotes, as usual, were great. This year I was particularly impressed with how well attended were the Professional Development sessions. Not too many years ago, those were very sparsely attended, but this year, the two that I attended were standing-room only, and these were not tiny rooms. I would say this is a testament to both the maturity of the attendees realizing how important these topics are to career success, as well as to the ever-increasing skills of the presenters and the program committee for selecting speakers and topics that resonated with people. If, as is usually the case, you were not able to get to every session that you wanted to because there were just too darn many good ones, I encourage you to get the recordings.
Overall, it was a great time as these events always are. It was wonderful to see old friends and make new ones, and the people of Charlotte did an awesome job hosting the event and letting their hospitality shine (extra kudos to SQLSentry for all they did with the shuttle, maps, and other event sponsorships). We’re back in Seattle next year (it is a release year, after all) but I would say that with the success of this year’s event, I strongly encourage the Board and PASS HQ to firmly reestablish the location rotation schedule. I’ll even go so far as to suggest standardizing on an alternating Seattle – Charlotte schedule, or something like that.
If you missed the Summit this year, start saving now, and register early, so you can join us!
As I type this I’m on an airplane en route to my 12th PASS Summit. It’s been a very busy 3.5 months since my last post on my work as a Board member. Nearing the end of my 2-year term I am struck by how much has happened, and yet how fast the time has gone.
But I’ll save the retrospective post for next time and today focus on what happened in Q3. In the last three months we made progress on several fronts, thanks to the contributions of many volunteers and HQ staff members. They deserve our appreciation for their dedication to delivering for the membership week after week.
The Virtual Chapters continue to provide many PASS members with valuable free training. Between July and September of 2013 VCs hosted over 50 webinars with a total of 4300 attendees.
This quarter also saw the launch of the Security & Global Russian VCs. Both are off to a strong start and I welcome these additions to the Virtual Chapter portfolio.
At the beginning of 2012 we had 14 Virtual Chapters. Today we have 22. This growth has been exciting to see. It has also created a need to have more volunteers help manage the work of the VCs year-round. We have renewed focus on having Virtual Chapter Mentors work with the VC Leaders and other volunteers. I am grateful to volunteers Julie Koesmarno, Thomas LeBlanc and Marcus Bittencourt who join original VC Mentor Steve Simon on this team. Thank you for stepping up to help.
Many improvements to the VC web sites have been rolling out over the past few weeks. Our marketing and IT teams have been busy working on a new look-and-feel, features and a logo for each VC. They have given the VCs a fresh, professional look consistent with the rest of the PASS branding, and all VCs now have a logo that connects to PASS and the particular focus of the chapter.
24 Hours of PASS
The Summit Preview edition of 24HOP was held on July 31 and by all accounts was a success. Our first use of the GoToWebinar platform for this event went extremely well. Thanks to our speakers, moderators and sponsors for making this event possible. Special thanks to HQ staffers Vicki Van Damme and Jane Duffy for a smoothly run event.
Coming up: the 24HOP Portuguese Edition will be held November 13-14, followed December 12-13 by the Spanish Edition. Thanks to the Portuguese- and Spanish-speaking community volunteers who are organizing these events.
July Board Meeting
The Board met July 18-19 in Kansas City. The first order of business was the election of the Executive Committee who will take office January 1. I was elected Vice President of Marketing and will join incoming President Thomas LaRock, incoming Executive Vice President of Finance Adam Jorgensen and Immediate Past President Bill Graziano on the Exec Co. I am honored that my fellow Board members elected me to this position and look forward to serving the organization in this role.
Visit to PASS HQ
In late September I traveled to Vancouver for my first visit to PASS HQ, where I joined Tom LaRock and Adam Jorgensen to make plans for 2014. Our visit was just a few weeks before PASS Summit and coincided with the Board election, and the office was humming with activity. I saw first-hand the enthusiasm and dedication of everyone there. In each interaction I observed a focus on what is best for PASS and our members. Our partners at HQ are key to the organization’s success. This week at PASS Summit is a great opportunity for all of us to remember that, and say “thanks.”
PASS Summit—of course! I'll be around all week and look forward to connecting with many of our members over meals, at the Community Zone and between sessions.
In the evenings you can find me at the Welcome Reception, Exhibitor’s Reception and Community Appreciation Party. And I will be at the Board Q&A session Friday at 12:45 p.m.
The newly elected Exec Co and Board members take office January 1, and the Virtual Chapter portfolio is transitioning to a new director. I'm thrilled that Jen Stirrup will be taking over. Jen has experience as a volunteer and co-leader of the Business Intelligence Virtual Chapter and was a key contributor to the BI VC's expansion to serving our members in the EMEA region. I'll be working closely with Jen over the next couple of months to ensure a smooth transition.
Watching Shark Tank tonight and the first presentation was by Mango Mango Preserves and it highlighted an interesting contrast in business trends today and how to capitalize on opportunities. <Spoiler Alert> Even though every one of the sharks was raving about the product samples they tried, with two of them going for second and third servings, none of them made a deal to invest in the company.</Spoiler> In fact, one of the sharks, Kevin O’Leary, kept ripping into the owners with statements to the effect that he thinks they are headed over a financial cliff because he felt their costs were way out of line and would be their downfall if they didn’t take action to radically cut costs. He said that he had previously owned a jams and jellies business and knew the cost ratios that you had to have to make it work. I don’t doubt he knows exactly what he’s talking about and is 100% accurate…for doing business his way, which I’ll call “Go Big”. But there’s a whole other way to do business today that would be ideal for these ladies to pursue.
As I understand it, based on his level of success in various businesses and the fact that he is even in a position to be investing in other companies, Kevin’s approach is to go mass market (Go Big) and make hundreds of millions of dollars in sales (or something along that scale) while squeezing out every ounce of cost that you can to produce an acceptable margin. But there is a very different way of making a very successful business these days, which is all about building a passionate and loyal community of customers that are rooting for your success and even actively trying to help you succeed by promoting your product or company (Go Special). This capitalizes on the power of social media, niche marketing, and The Long Tail. One of the most prolific writers about capitalizing on this trend is Seth Godin, and I hope that the founders of Mango Mango pick up a couple of his books (probably Purple Cow and Tribes would be good starts) or at least read his blog. I think the adoration expressed by all of the sharks for the product is the biggest hint that they have a remarkable product and that they are perfect for this type of business approach.
Both are completely valid business models, and it may certainly be that the scale at which Kevin O’Leary wants to conduct business where he invests his money is well beyond the long tail, but that doesn’t mean that there is not still a lot of money to be made there. I wish them the best of luck with their endeavors!
The 2013 election campaign for the PASS Board of Directors is underway. There are seven qualified candidates running this year. They all offer a wealth of experience volunteering for PASS and the SQL Server community.
One of these candidates, Amy Lewis, asked me to write a reference for her to include on her candidate application. I have a lot of experience working with Amy and was pleased to provide this reference:
I enthusiastically support Amy Lewis as a candidate for the PASS Board of Directors. I have known and worked with Amy in various PASS volunteer capacities for years, starting when we were both leaders of SIGs (the precursors to the Virtual Chapters). In that time I have seen Amy grow as a leader, taking on increasing responsibility and developing her leadership skills in the process. From the Program Committee to the BI Virtual Chapter to her local user group's SQL Saturday, Amy has demonstrated a capacity to organize and lead volunteers. A successful leader delivers results, and does so in a way that encourages and empowers the people she is working with; Amy embodies this leadership style.
As Director for Virtual Chapters I have most recently worked with Amy in her capacity of DW/BI VC Leader. This VC is one of our largest and most active, and Amy's leadership is a key contribution to that success. I was pleased to see that Amy was also thinking about succession and prepared other volunteers to take over the chapter leadership.
Amy has shown an understanding of PASS' strategic goals and has focused her volunteer efforts to help us reach those goals. For the past couple of years we have been trying to expand PASS's reach and relevance to SQL communities around the world. The VCs are a key vehicle for this expansion.
Amy embraced this idea and organized the VC to engage volunteers in Europe & Australia and provide content that could reach SQL professionals in those regions.
A second key strategy for PASS is expanding into the data analytics space. Again Amy rose to the occasion helping to shape the program for our first Business Analytics Conference and leveraging the BI VC to promote the event.
By all measures I think Amy is prepared to serve on the Board and contribute in a positive way.
While preparing for a recent SQL Saturday presentation, I was struck by a crazy idea (shocking, I know): Could someone import the content of SQL Server Books Online into a database and apply full-text indexing to it? The answer is yes, and it's really quite easy to do.
The first step is finding the installed help files. If you have SQL Server 2012, BOL is installed under the Microsoft Help Library. You can find the install location by opening SQL Server Books Online and clicking the gear icon for the Help Library Manager. When the new window pops up click the Settings link, you'll get the following:
You'll see the path under Library Location.
Once you navigate to that path you'll have to drill down a little further, to C:\ProgramData\Microsoft\HelpLibrary\content\Microsoft\store. This is where the help file content is kept if you downloaded it for offline use.
Depending on which products you've downloaded help for, you may see a few hundred files. Fortunately they're named well and you can easily find the "SQL_Server_Denali_Books_Online_" files. We are interested in the .MSHC files only, and can skip the Installation and Developer Reference files.
Despite the .MSHC extension, these files are compressed with the standard Zip format, so your favorite archive utility (WinZip, 7-Zip, WinRAR, etc.) can open them. When you do, you'll see a few thousand files in the archive. We are only interested in the .htm files, but there's no harm in extracting all of them to a folder. 7-Zip provides a command-line utility and the following will extract to a D:\SQLHelp folder previously created:
7z e -oD:\SQLHelp "C:\ProgramData\Microsoft\HelpLibrary\content\Microsoft\store\SQL_Server_Denali_Books_Online_B780_SQL_110_en-us_1.2.mshc" *.htm
Well that's great Rob, but how do I put all those files into a full-text index?
I'll tell you in a second, but first we have to set up a few things on the database side. I'll be using a database named Explore (you can certainly change that) and the following setup is a fragment of the script I used in my presentation:
CREATE SCHEMA help AUTHORIZATION dbo;
-- Create default fulltext catalog for later FT indexes
CREATE FULLTEXT CATALOG FTC AS DEFAULT;
CREATE TABLE help.files(file_id int not null IDENTITY(1,1)
CONSTRAINT PK_help_files PRIMARY KEY,
path varchar(256) not null CONSTRAINT UNQ_help_files_path UNIQUE,
doc_type varchar(6) DEFAULT('.xml'),
content varbinary(max) not null);
CREATE FULLTEXT INDEX ON help.files(content TYPE COLUMN doc_type LANGUAGE 1033)
KEY INDEX PK_help_files;
This will give you a table, default full-text catalog, and full-text index on that table for the content you're going to insert. I'll be using the command line again for this, it's the easiest method I know:
for %a in (D:\SQLHelp\*.htm) do sqlcmd -S. -E -d Explore -Q"set nocount on;insert help.files(path,content) select '%a', cast(c as varbinary(max)) from openrowset(bulk '%a', SINGLE_CLOB) as c(c)"
You'll need to copy and run that as one line in a command prompt. I'll explain what this does while you run it and watch several thousand files get imported:
The "for" command allows you to loop over a collection of items. In this case we want all the .htm files in the D:\SQLHelp folder. For each file it finds, it will assign the full path and file name to the %a variable. In the "do" clause, we'll specify another command to be run for each iteration of the loop. I make a call to "sqlcmd" in order to run a SQL statement. I pass in the name of the server (-S.), where "." represents the local default instance. I specify -d Explore as the database, and -E for trusted connection. I then use -Q to run a query that I enclose in double quotes.
The query uses OPENROWSET(BULK…SINGLE_CLOB) to open the file as a data source, and to treat it as a single character large object. In order for full-text indexing to work properly, I have to convert the text content to varbinary. I then INSERT these contents along with the full path of the file into the help.files table created earlier. This process continues for each file in the folder, creating one new row in the table.
And that's it! 5 SQL Statements and 2 command line statements to unzip and import SQL Server Books Online! In case you're wondering why I didn't use FILESTREAM or FILETABLE, it's simply because I haven't learned them…yet. I may return to this blog after I figure that out and update it with the steps to do so. I believe that will make it even easier.
In the spirit of exploration, I'll leave you to work on some fulltext queries of this content. I also recommend playing around with the sys.dm_fts_xxxx DMVs (I particularly like sys.dm_fts_index_keywords, it's pretty interesting). There are additional example queries in the download material for my presentation linked above.
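To get you started, here's a minimal sketch of the kind of full-text query I mean, against the help.files table created above (the search phrase is just an example):

```sql
-- Find help topics mentioning an exact phrase
SELECT TOP (10) file_id, path
FROM help.files
WHERE CONTAINS(content, N'"full-text index"');

-- Peek at the indexed keywords via my favorite DMV
SELECT TOP (20) display_term, document_count
FROM sys.dm_fts_index_keywords(DB_ID(), OBJECT_ID(N'help.files'))
ORDER BY document_count DESC;
```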
Many thanks to Kevin Boles (t) for his advice on (re)checking the content of the help files. Don't let that .htm extension fool you! The 2012 help files are actually XML, and you'd need to specify '.xml' in your document type column in order to extract the full-text keywords. (You probably noticed this in the default definition for the doc_type column.) You can query sys.fulltext_document_types to get a complete list of the types that can be full-text indexed.
I also need to thank Hilary Cotter for giving me the original idea. I believe he used MSDN content in a full-text index for an article from waaaaaaaaaaay back, that I can't find now, and had forgotten about until just a few days ago. He is also co-author of Pro Full-Text Search in SQL Server 2008, which I highly recommend. He also has some FTS articles on Simple Talk:
For a recent SSIS package at work I needed to determine the distinct values in a partition, the number of rows in each partition and the file group name on which each partition resided in order to come up with a grouping mechanism. Of course sys.partitions comes to mind for some of that but there are a few other tables you need to link to in order to grab the information required.
The table I’m working on contains 8.8 billion rows. Finding the distinct partition keys from this table was not a fast operation. My original solution was to create a temporary table, grab the distinct values for the partitioned column, then update via sys.partitions for the rows and the $PARTITION function for the partition ID, and finally look back to the sys.filegroups table for the filegroup names. It wasn’t pretty, and it could take up to 15 minutes to return the results. The primary issue is pulling distinct values from the table. Queries for distinct against 8.8 billion rows don’t go quickly.
A few beers into a conversation with a friend and we ended up talking about work which led to a conversation about the task described above. The solution was already built in SQL Server, just needed to pull it together.
The first table I needed was sys.partition_range_values. This contains one row for each range boundary value for a partition function. In my case I have a partition function which uses dayid values. For example July 4th would be represented as an int, 20130704. This table lists out all of the dayid values which were defined in the function. This eliminated the need to query my source table for distinct dayid values, everything I needed was already built in here for me. The only caveat was that in my SSIS package I needed to create a bucket for any dayid values that were out of bounds for my function. For example if my function handled 20130501 through 20130704 and I had day values of 20130401 or 20130705 in my table, these would not be listed in sys.partition_range_values. I just created an “everything else” bucket in my ssis package just in case I had any dayid values unaccounted for.
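A sketch of that boundary-value lookup (the partition function name is a placeholder for your own):

```sql
-- List every boundary value defined for a partition function
SELECT pf.name AS function_name, prv.boundary_id, prv.value AS day_id
FROM sys.partition_range_values AS prv
JOIN sys.partition_functions AS pf
  ON pf.function_id = prv.function_id
WHERE pf.name = N'pfDayId'  -- placeholder: your partition function name
ORDER BY prv.boundary_id;
```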
Getting the number of rows in a partition is easy. The sys.partitions view contains a row count for each partition; just query by the table’s object_id and an index_id of 1 (the clustered index).
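As a sketch, assuming a hypothetical table named dbo.FactTable:

```sql
-- Row count per partition for the clustered index (index_id = 1).
-- dbo.FactTable is a placeholder table name.
SELECT  p.partition_number,
        p.rows
FROM    sys.partitions AS p
WHERE   p.object_id = OBJECT_ID('dbo.FactTable')
  AND   p.index_id  = 1
ORDER BY p.partition_number;
```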
The final piece of information was the filegroup name. There are two options for getting it: sys.data_spaces or sys.filegroups. For my query I chose sys.filegroups, but really it’s a matter of preference and data needs. To bridge between sys.partitions and either sys.data_spaces or sys.filegroups you need the container_id, which you get by joining sys.allocation_units.container_id to sys.partitions.hobt_id. sys.allocation_units contains the data_space_id column, which then lets you join to either sys.data_spaces or sys.filegroups.
The end result is the query below, which typically executes for me in under one second. I’ve included joins to both sys.filegroups and sys.data_spaces, with the sys.data_spaces join commented out.
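Here's a sketch of how the pieces fit together. The table name (dbo.FactTable) is a placeholder, and the boundary-to-partition mapping shown assumes a RANGE LEFT function (for RANGE RIGHT, partition_number is boundary_id + 1):

```sql
-- Distinct boundary values, row counts, and filegroup per partition.
-- dbo.FactTable is a hypothetical table name.
SELECT  p.partition_number,
        prv.value   AS dayid_boundary,    -- boundary value from the function
        p.rows      AS row_count,
        fg.name     AS filegroup_name
        --, ds.name AS data_space_name    -- alternative via sys.data_spaces
FROM    sys.partitions AS p
JOIN    sys.allocation_units AS au
        ON  au.container_id = p.hobt_id
        AND au.type = 1                   -- IN_ROW_DATA, avoids LOB duplicates
JOIN    sys.filegroups AS fg
        ON  fg.data_space_id = au.data_space_id
--JOIN  sys.data_spaces AS ds
--      ON  ds.data_space_id = au.data_space_id
JOIN    sys.indexes AS i
        ON  i.object_id = p.object_id AND i.index_id = p.index_id
JOIN    sys.partition_schemes AS ps
        ON  ps.data_space_id = i.data_space_id
JOIN    sys.partition_functions AS pf
        ON  pf.function_id = ps.function_id
LEFT JOIN sys.partition_range_values AS prv
        ON  prv.function_id = pf.function_id
        AND prv.boundary_id = p.partition_number  -- assumes RANGE LEFT
WHERE   p.object_id = OBJECT_ID('dbo.FactTable')
  AND   p.index_id  = 1
ORDER BY p.partition_number;
```

The LEFT JOIN matters: the first or last partition (depending on RANGE LEFT vs. RIGHT) has no matching boundary row, which is exactly the “everything else” bucket described above.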
As I mentioned above, this shaves a good 10-15 minutes off my original SSIS package and is a really easy tweak to get a boost in my ETL time. Enjoy.
Eighteen months into my time as a PASS Director I’m especially proud of what the Virtual Chapters have accomplished and want to share that progress with you. I'm also pleased that the organization has invested more resources to support the VCs.
In this quarter I got to attend two conferences and meet more members of the SQL community.
In the first six months of 2013, the VCs have hosted more than 50 webinars, offering free technical education to over 6,200 attendees. This is a great benefit to PASS members; thanks to the VC leaders, volunteers, and speakers who contribute their time to produce these events.
The Performance VC held its “Summer Performance Palooza,” an event featuring eight back-to-back sessions. Links to the session recordings can be found on the VC’s website.
The new webinar platform, GoToWebinar, has been rolled out to all the VCs. This is a more stable, scalable platform and represents an important investment into the future of the VCs.
A few new VCs are in the planning stages, including one focused on Security and one for Russian speakers. Visit the Virtual Chapter home page to sign up for the chapters that interest you.
Each Virtual Chapter is offering a discount code for PASS Summit 2013. Be sure to ask your VC leader for the code to save $200 on Summit registration.
24 Hours of PASS
The next 24HOP will be on July 31. This Summit Preview edition will feature 24 consecutive webcasts presented by experts who will be speaking at Summit in October. Registration for this free event is open now, and we will be using the GoToWebinar platform for 24HOP as well.
Business Analytics Conference
April marked the first PASS Business Analytics Conference in Chicago. This introduced PASS to another segment of data professionals: the analysts and data scientists who work with the world’s growing collection of data. Overall the inaugural event was a success and gave us a glimpse into this increasingly important space.
After Chicago the Board had several serious discussions about the lessons learned from this event and what we should do next. We agreed to apply those lessons and continue to invest in this event; there will be a PASS Business Analytics Conference in 2014. I’m very pleased the next event will be in San Jose, CA, the heart of Silicon Valley, a place where a great deal of investment and innovation in data analytics is taking place.
Global SQL Community
Over the last couple of years PASS has been taking steps to become more relevant to SQL communities in different parts of the world. In May I had the opportunity to attend SQL Bits XI in Nottingham, England. It was enlightening to meet and talk with SQL professionals from around the U.K. as well as many other European countries. The many SQL Bits volunteers put on a great event and were gracious hosts.
The Board passed the FY14 budget at the end of June. The budget process can be challenging and requires the Board to make some difficult choices about where to allocate resources. Overall I’m satisfied with the decisions we made and think we are investing in the right activities and programs.
The Board is meeting July 18-19 in Kansas City. We will be holding the election for the Executive Committee that will take office in 2014. We will also be discussing plans for the next BA Conference as well as the next steps for our Global Growth initiative.
Applications for the upcoming Board of Directors election open on July 24. If you are considering running for the Board you can visit the PASS elections site to learn more about the election process. And I encourage anyone considering running to reach out to current and past Board members to learn about what the role entails.
Plans for the next PASS Summit are in full swing. We are working on some fun new ideas to introduce attendees to the many ways to become involved in the SQL community.