How Not To Outgrow Your DB Infra: A Simple Setup
Whether you're an experienced DBA or about to embark on the world of database administration, one of the focal points of your duties is to ensure that you will not outgrow your current DB system. We all know that, but where to start?
It's quite simple: all it takes is a bit of preparation, a baseline, and monitoring until you have some statistics to work on.
Part I: Disk Management
DDL
USE [Database_Name] -- change to the appropriate admin db
GO
CREATE TABLE dbo.DBCapacity
(
     DBName varchar(60)
    ,DateStamp datetime DEFAULT getdate()
    ,SizeInGB decimal(18,2)
    ,CONSTRAINT PK_DBCapacity PRIMARY KEY CLUSTERED (DBName ASC, DateStamp ASC)
)
GO
DML
-- procedure to retrieve db sizes
CREATE PROCEDURE getDBSizes
as
DECLARE @database varchar(128)
       ,@getdb CURSOR
       ,@sqlstmt nvarchar(4000)

CREATE TABLE #result (dbname varchar(100), dbsize decimal(18,2))

SET @getdb = CURSOR FOR
    SELECT name FROM sys.databases WHERE state_desc = 'ONLINE' -- skip offline/restoring dbs

OPEN @getdb
FETCH NEXT FROM @getdb INTO @database
WHILE (@@FETCH_STATUS = 0)
BEGIN
    -- size is in 8KB pages, so (size*8)/1048576.0 converts KB to GB
    SELECT @sqlstmt = 'INSERT INTO #result(dbname,dbsize)
        SELECT ''' + @database + ''',CONVERT(decimal(18,2),(size*8)/1048576.0)
        FROM ' + QUOTENAME(@database) + '.SYS.SYSFILES WHERE groupid=1'
    EXEC (@sqlstmt)
    FETCH NEXT FROM @getdb INTO @database
END
CLOSE @getdb
DEALLOCATE @getdb

INSERT INTO dbCapacity(dbName, sizeInGB)
SELECT dbname, dbsize FROM #result

DROP TABLE #result
GO
-- Alternative size source if you are taking regular backups and you really need to come up with the magic number
-- (note: DATEADD does not accept a variable datepart, so the interval is hardcoded to week here; swap in month, etc. as needed)
DECLARE @startdate datetime
SET @startdate = '2007-01-01' -- set the start date

SELECT DATEADD(week, DATEDIFF(week, 0, backup_start_date), 0) as DateStamp
      ,CONVERT(decimal(18,2),
           SUM(CASE WHEN [type]='D' THEN backup_size ELSE 0 END)/1073741824.0) as DBSize_DataAndLog
FROM msdb.dbo.backupset
WHERE backup_start_date >= @startdate
GROUP BY DATEADD(week, DATEDIFF(week, 0, backup_start_date), 0)
ORDER BY DATEADD(week, DATEDIFF(week, 0, backup_start_date), 0) asc
-- Retrieving the information
CREATE PROCEDURE spGetDBGrowth
    @dbname varchar(100) = null
as
SET NOCOUNT ON

SELECT dbName
      ,sizeingb as CurrentSize
      ,(SELECT TOP 1 sizeingb FROM dbcapacity
        WHERE datestamp < d.datestamp AND dbName = d.dbName
        ORDER BY datestamp desc) as PreviousSize
      ,sizeingb - (SELECT TOP 1 sizeingb FROM dbcapacity
        WHERE datestamp < d.datestamp AND dbName = d.dbName
        ORDER BY datestamp desc) as GrowthIncrease
FROM dbcapacity d
WHERE datestamp = (SELECT max(datestamp) FROM dbcapacity WHERE dbName = d.dbName)
  AND dbName = COALESCE(@dbname, dbName)
ORDER BY sizeingb desc
SET NOCOUNT OFF
GO
Now all you need to do is create a job, add a job step that executes getDBSizes, and schedule it.
I would recommend running it during off-peak hours at a regular interval, depending on where you get your size data.
If it's from the backupset, the timing doesn't really matter and you don't need to set up the job at all.
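If you haven't scripted an Agent job before, the setup can be sketched like this. The job name, admin database name, and 2 AM schedule below are placeholder examples, not requirements:

```sql
USE msdb
GO
EXEC dbo.sp_add_job @job_name = N'Collect DB Sizes'
EXEC dbo.sp_add_jobstep
     @job_name = N'Collect DB Sizes'
    ,@step_name = N'Run getDBSizes'
    ,@subsystem = N'TSQL'
    ,@database_name = N'Database_Name'      -- your admin db
    ,@command = N'EXEC getDBSizes'
EXEC dbo.sp_add_jobschedule
     @job_name = N'Collect DB Sizes'
    ,@name = N'Nightly'
    ,@freq_type = 4                         -- daily
    ,@freq_interval = 1
    ,@active_start_time = 020000            -- 2:00 AM, off-peak
EXEC dbo.sp_add_jobserver @job_name = N'Collect DB Sizes', @server_name = N'(local)'
GO
```

You can of course do the same thing through the Management Studio job wizard instead.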
Computing the growth
I've included a simple procedure that compares current growth against the previous datestamp in the dbCapacity table, plus an undocumented procedure to retrieve the currently available disk space. So you now have the stats, and all that's left is to analyze the data.
EXEC spGetDBGrowth --to retrieve growth
EXEC xp_fixeddrives --undocumented procedure to retrieve the current available disk space
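To combine the two, you can estimate how long the free space will last. This is just a rough sketch: it assumes your data files share one set of drives and that the growth between the last two samples is representative:

```sql
-- Rough runway estimate: free space per drive vs. total growth per sample interval
CREATE TABLE #drives (drive char(1), FreeMB int)
INSERT INTO #drives EXEC xp_fixeddrives

SELECT d.drive
      ,d.FreeMB/1024.0 as FreeGB
      ,g.TotalGrowthGB
      ,CASE WHEN g.TotalGrowthGB > 0
            THEN (d.FreeMB/1024.0)/g.TotalGrowthGB
            ELSE NULL END as IntervalsLeft
FROM #drives d
CROSS JOIN (
    -- sum of (latest size - previous size) across all databases
    SELECT SUM(c.sizeingb -
        (SELECT TOP 1 sizeingb FROM dbcapacity
         WHERE dbName = c.dbName AND datestamp < c.datestamp
         ORDER BY datestamp desc)) as TotalGrowthGB
    FROM dbcapacity c
    WHERE c.datestamp = (SELECT max(datestamp) FROM dbcapacity WHERE dbName = c.dbName)
) g

DROP TABLE #drives
```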
PART II: Server Capacity via Performance Monitoring
Here's the second installment, still under construction; I'll be adding to this post, so keep an eye on it.
The procedures at the end of each setup are just examples of how you can retrieve the information for your own interpretation
(to help you out on this one, I found this excellent post http://www.sql-server-performance.com/tips/performance_main.aspx).
I am still trying to figure out how to handle the different counters, as there is little documentation on them, or maybe I'm looking in the wrong places
(to give you a jump start on the counters: http://technet2.microsoft.com/windowsserver/en/library/3fb01419-b1ab-4f52-a9f8-09d5ebeb9ef21033.mspx?mfr=true).
Also, you can add as many counters as you want to monitor; these are just the ones I want to monitor for the meantime.
As always, don't forget to create a scheduled job so you can retrieve data at regular intervals.
DDL
use database_name
go
create table PerformanceCounters
( datestamp datetime default getdate()
,ObjectName varchar(30)
,CounterName varchar(50)
,InstanceName varchar(100)
,CntrValue bigint
,CntrType bigint
,constraint [PK_PerformanceCounters]
primary key clustered (datestamp desc ,objectname asc ,countername asc,instancename asc)
)
go
DML
create procedure GetPerformanceCounters
as
set nocount on

declare @servicename varchar(200)
select @servicename = case when @@servicename <> 'mssqlserver'
                           then 'MSSQL$' + @@servicename + ':'
                           else 'sqlserver:' end
insert into PerformanceCounters(objectname,countername,instancename,cntrvalue,cntrtype)
select [object_name]
,counter_name
,coalesce(instance_name,' ')
,cntr_value
,cntr_type
from sys.dm_os_performance_counters
where ([object_name] =@servicename + 'Access Methods' and counter_name in ('Page Splits/sec','Full Scans/sec'))
or ([object_name] =@servicename + 'Buffer Manager' and counter_name in ('Free Pages','Target Pages','Total pages','Buffer cache hit ratio','Buffer cache hit ratio base','Page life expectancy','Lazy writes/sec','Checkpoint pages/sec'))
or ([object_name] =@servicename + 'Databases' and counter_name in ('Log Flushes/sec','Log Flush Waits/sec','Log Flush Wait Time','Transactions/sec','Backup/Restore Throughput/sec'))
or ([object_name] =@servicename + 'Locks' and counter_name in ('Average Wait Time (ms)','Number of Deadlocks/sec'))
or ([object_name] =@servicename + 'SQL Statistics' and counter_name in ('SQL Compilations/sec','Batch Requests/sec'))
or ([object_name] =@servicename + 'General Statistics' and counter_name in ('Processes blocked','User Connections'))
or ([object_name] =@servicename + 'Memory Manager' and counter_name in ('Target Server Memory (KB)','Total Server Memory (KB)'))
or ([object_name] =@servicename + 'SQL Errors')
order by counter_name asc
-- summarize to retrieve meaningful data; this is ongoing, and I'll probably
-- process the statistics further so the output is easier to read
set nocount off
go
create procedure spGetPerformanceOverview
@startdate datetime=null
,@stopdate datetime=null
as
set nocount on
-- IO Problems and transaction improvements
select datestamp
,sum(case when countername='page splits/sec' then CntrValue else 0 end) as [Page Splits/sec]
,sum(case when countername='free pages' then CntrValue else 0 end) as [Free Pages]
,sum(case when countername='Target pages' then CntrValue else 0 end) as [Target Pages]
,sum(case when countername='Total pages' then CntrValue else 0 end) as [Total Pages]
,sum(case when countername='Log Flushes/Sec' and instancename='_Total'
then CntrValue else 0 end) as [Total Log Flushes/sec]
,sum(case when countername='Log Flush waits/Sec' and instancename='_Total'
then CntrValue else 0 end) as [Total Log Flush Waits/sec]
,sum(case when countername='Log Flush wait time' and instancename='_Total'
then CntrValue else 0 end) as [Total Log Flush Wait Time]
,sum(case when countername='Full Scans/sec' then CntrValue else 0 end) as [Full Scans/sec]
,sum(case when countername='average wait time (ms)' and Instancename='_Total'
then CntrValue else 0 end)/1000 as [Total Average Wait Time (sec)]
,sum(case when countername='average wait time (ms)' and Instancename='Application'
then CntrValue else 0 end)/1000 as [Application Average Wait Time (sec)]
,sum(case when countername='average wait time (ms)' and Instancename='Database'
then CntrValue else 0 end)/1000 as [Database Average Wait Time (sec)]
,sum(case when countername='number of deadlocks/sec' and Instancename='_Total'
then CntrValue else 0 end) as [Total Number Of Deadlocks/sec]
,sum(case when countername='number of deadlocks/sec' and Instancename='Database'
then CntrValue else 0 end) as [Database Number Of Deadlocks/sec]
,sum(case when countername='number of deadlocks/sec' and Instancename='Application'
then CntrValue else 0 end) as [Application Number Of Deadlocks/sec]
,sum(case when countername='SQL Compilations/sec'
then CntrValue else 0 end) as [SQL Compilations/sec]
,sum(case when countername='Transactions/sec' and instancename='_Total'
then CntrValue else 0 end) as [Total Transactions/sec]
,sum(case when countername='Processes Blocked'
then CntrValue else 0 end) as [Processes Blocked]
from PerformanceCounters p
where datestamp=(select max(datestamp) from performancecounters)
--and (instancename = coalesce(@database,instancename) or instancename is null or instancename='_Total')
and countername in ('page splits/sec','free pages','Target pages','Total pages','Log Flushes/Sec'
,'Log Flush waits/Sec','Log Flush wait time','Full Scans/sec','average wait time (ms)','number of deadlocks/sec'
,'SQL Compilations/sec' ,'Transactions/sec','Processes Blocked'
)
group by datestamp
-- RAM
select datestamp
,sum(case when countername='Target Server Memory (kb)' then CntrValue else 0 end) as [Target Server Memory (KB)]
,sum(case when countername='Total Server Memory (kb)' then CntrValue else 0 end) as [Total Server Memory (KB)]
,convert(decimal(18,2),convert(decimal(18,2),sum(case when countername='Buffer Cache Hit Ratio' then CntrValue else 0 end)) /
sum(case when countername='Buffer Cache Hit Ratio Base' then CntrValue else 0 end))*100 as [Buffer Cache Hit Ratio]
,sum(case when countername='Page Life Expectancy' then CntrValue else 0 end) as [Page Life Expectancy]
,sum(case when countername='Lazy Writes/Sec' then CntrValue else 0 end) as [Lazy Writes/Sec]
,sum(case when countername='CheckPoint Pages/Sec' then CntrValue else 0 end)/1000 as [CheckPoint Pages/Sec]
from PerformanceCounters p
where datestamp=(select max(datestamp) from performancecounters)
--and (instancename = coalesce(@database,instancename) or instancename is null or instancename='_Total')
and countername in ('Target Server Memory (kb)','Total Server Memory (kb)','Buffer Cache Hit Ratio','Buffer Cache Hit Ratio Base'
,'Page Life Expectancy','Lazy Writes/Sec','CheckPoint Pages/Sec')
group by datestamp
-- network
select datestamp
,sum(case when countername='Batch Requests/Sec' then CntrValue else 0 end) as [Batch Requests/Sec]
,sum(case when countername='User Connections' then CntrValue else 0 end) as [User Connections]
from PerformanceCounters p
where datestamp=(select max(datestamp) from performancecounters)
--and (instancename = coalesce(@database,instancename) or instancename is null or instancename='_Total')
and countername in ('Batch Requests/Sec','User Connections')
group by datestamp
-- backups/restores
select datestamp
,instanceName
,sum(case when countername='Backup/Restore Throughput/sec' then CntrValue else 0 end) as [Backup/Restore Throughput/sec]
from PerformanceCounters p
where datestamp=(select max(datestamp) from performancecounters)
--and (instancename = coalesce(@database,instancename) or instancename is null or instancename='_Total')
and countername in ('Backup/Restore Throughput/sec')
group by datestamp,instancename
set nocount off
GO
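One thing to watch out for when interpreting the output: most of the .../sec counters in sys.dm_os_performance_counters are cumulative totals since the instance last started, not rates, so a single snapshot only shows a running total. A sketch of turning the two most recent snapshots in the PerformanceCounters table into per-second rates (assuming at least two samples have been collected):

```sql
-- Convert cumulative counters into per-second rates from the two latest snapshots
DECLARE @latest datetime, @previous datetime
SELECT @latest = max(datestamp) FROM PerformanceCounters
SELECT @previous = max(datestamp) FROM PerformanceCounters WHERE datestamp < @latest

SELECT cur.objectname
      ,cur.countername
      ,cur.instancename
      ,(cur.cntrvalue - prev.cntrvalue) * 1.0
       / DATEDIFF(second, @previous, @latest) as PerSecond
FROM PerformanceCounters cur
JOIN PerformanceCounters prev
  ON prev.datestamp = @previous
 AND prev.objectname = cur.objectname
 AND prev.countername = cur.countername
 AND prev.instancename = cur.instancename
WHERE cur.datestamp = @latest
  AND cur.countername LIKE '%/sec%'
```

Point-in-time counters like Page Life Expectancy or User Connections can be read directly from a single snapshot.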
Don't forget that the server is only as good as the queries you let it process.
So it's always a chicken-and-egg situation.
Legacy Comments
Mladen
2007-11-15 |
re: How Not To Outgrow Your DB Infra: A Simple Setup nice! welcome to the club! |