Tuesday, November 2, 2010

SQLSaturday #58 Presentation

Last Friday I had the opportunity to present a session on table and index partitioning at SQLSaturday #58 (yes, a SQLSaturday on a Friday) in Minnesota.

I had a great time, met a lot of great people and found the entire experience very enjoyable; I want to present more.

I said I'd put my queries up on my blog, so here they are.

Script 1 shows how to create a very basic partitioned table, add values to it, and move data around. (Create a folder on your C: drive called PartitionDB before starting.)


/**************************************
** Database Partitioning Script **
** Eric Zierdt, W3i **
** http://ericemployed.blogspot.com **
** http://blog.w3i.com **
**************************************/
USE master
GO
CREATE DATABASE PartitionDB ON
PRIMARY(NAME = db_data, FILENAME = 'c:\PartitionDB\db.mdf' ,SIZE = 25MB)
, FILEGROUP FG1( NAME = FG1_data, FILENAME = 'c:\PartitionDB\FG1.ndf' ,SIZE = 25MB)
, FILEGROUP FG2( NAME = FG2_data, FILENAME = 'c:\PartitionDB\FG2.ndf', SIZE = 25MB)
, FILEGROUP FG3( NAME = FG3_data, FILENAME = 'c:\PartitionDB\FG3.ndf', SIZE = 25MB)
, FILEGROUP FG4( NAME = FG4_data, FILENAME = 'c:\PartitionDB\FG4.ndf', SIZE = 25MB)
, FILEGROUP FG5( NAME = FG5_data, FILENAME = 'c:\PartitionDB\FG5.ndf', SIZE = 25MB)
, FILEGROUP FGDefault( NAME = FGDefault_data, FILENAME = 'c:\PartitionDB\FGDefault.ndf', SIZE = 25MB)
LOG ON( NAME = db_log, FILENAME = 'c:\PartitionDB\log.ldf', SIZE = 1MB,FILEGROWTH = 100MB ) ;
GO
USE PartitionDB
GO

CREATE PARTITION FUNCTION EZ_PartitionFunction (INT) AS
RANGE RIGHT FOR VALUES (1000, 2000, 3000, 4000, 5000);

CREATE PARTITION SCHEME EZ_PartitionScheme AS
PARTITION EZ_PartitionFunction TO([FGDefault],[FG1], [FG2], [FG3], [FG4], [FG5])
GO


CREATE TABLE EZ_PartionedTable
( ID int PRIMARY KEY,
Name VARCHAR(50)
)
ON EZ_PartitionScheme(ID)


INSERT INTO EZ_PartionedTable (ID,NAME)
VALUES (1,'Test')
,(1001,'Test')
,(2001,'Test')
,(3001,'Test')
,(4001,'Test')
,(5001,'Test')
,(6001,'Test')
,(7001,'Test')
,(8001,'Test')

USE [master]
GO
ALTER DATABASE [PartitionDB] ADD FILEGROUP [FG6]
GO
ALTER DATABASE [PartitionDB] ADD FILE ( NAME = N'FG6_data', FILENAME = N'C:\PartitionDB\FG6.ndf' , SIZE = 25MB , FILEGROWTH = 5MB )
TO FILEGROUP [FG6]
GO
USE PartitionDB
GO

-- Adding a new range to the Scheme and Function
ALTER PARTITION SCHEME EZ_PartitionScheme NEXT USED [FG6]

ALTER PARTITION FUNCTION EZ_PartitionFunction() SPLIT RANGE (6000)



-- Removing a range from the Function
ALTER PARTITION FUNCTION EZ_PartitionFunction() MERGE RANGE (6000)





-- Adding a new range to the Scheme and Function
ALTER PARTITION SCHEME EZ_PartitionScheme NEXT USED [FG6]

ALTER PARTITION FUNCTION EZ_PartitionFunction() SPLIT RANGE (6000)

-- Removing a range out of sequence
ALTER PARTITION FUNCTION EZ_PartitionFunction() MERGE RANGE (5000)


ALTER PARTITION SCHEME EZ_PartitionScheme NEXT USED [FG5]

ALTER PARTITION FUNCTION EZ_PartitionFunction() SPLIT RANGE (5000)

USE master
GO



Note that the USE master at the bottom is there so that in the next script (I had these in different files/tabs), when you drop the database, you don't get an error about someone being connected.

Script 2 was used to demonstrate a large volume of data being merged and split, and how quickly data moves from one table to another when you use ALTER TABLE ... SWITCH. When I gave the presentation, I had a subfolder prepared ahead of time containing a database with 8 million rows already loaded. That won't work over the blog, so you'll need to create the table and load the 8 million rows yourself; accordingly, I'm leaving out the part about dropping the table and attaching the other database.


/**************************************
** Database Partitioning Script **
** Eric Zierdt, W3i **
** http://ericemployed.blogspot.com **
** http://blog.w3i.com **
**************************************/

/*** DROP TABLE AND PARTITION ***/
USE PartitionDB
DROP TABLE EZ_PartionedTable
DROP PARTITION SCHEME EZ_PartitionScheme
DROP PARTITION FUNCTION EZ_PartitionFunction
ALTER DATABASE [PartitionDB] REMOVE FILE [FG6_data]
GO
ALTER DATABASE [PartitionDB] REMOVE FILEGROUP [FG6]
GO

/*** CREATE NEW PARTITION FUNCTION BASED ON MILLION RECORDS ***/
CREATE PARTITION FUNCTION EZ_PartitionFunction (INT) AS
RANGE RIGHT FOR VALUES (1000000, 2000000, 3000000, 4000000, 5000000);

/*** CREATE NEW PARTITION SCHEME ***/
CREATE PARTITION SCHEME EZ_PartitionScheme AS
PARTITION EZ_PartitionFunction TO([FGDefault],[FG1], [FG2], [FG3], [FG4], [FG5])
GO

/*** CREATE NEW TABLE ***/
CREATE TABLE EZ_PartionedTable
( ID int NOT NULL IDENTITY (1000000, 1),
Name VARCHAR(50)
)
ON EZ_PartitionScheme(ID)

/*** ADD PK TO TABLE ***/
ALTER TABLE dbo.EZ_PartionedTable ADD CONSTRAINT
PK_EZ_PartionedTable PRIMARY KEY CLUSTERED
(
ID
) WITH( STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON EZ_PartitionScheme(ID)

GO

/*** LOAD 8 MILLION RECORDS...Go get some food, this will take a while... ***/
DECLARE @i INT = 1
WHILE @i <= 8000000
BEGIN
INSERT INTO EZ_PartionedTable (NAME)
VALUES ('Test')
SET @i += 1
END


/*** ADD A NEW FILEGROUP TO THE DATABASE ***/
USE [master]
GO
ALTER DATABASE [PartitionDB] ADD FILEGROUP [FG6]
GO
ALTER DATABASE [PartitionDB] ADD FILE ( NAME = N'FG6_data', FILENAME = N'C:\PartitionDB\FG6.ndf' , SIZE = 25MB , FILEGROWTH = 5MB )
TO FILEGROUP [FG6]
GO
USE PartitionDB
GO



/*** ADD A NEW FILEGROUP TO THE PARTITION ***/
ALTER PARTITION SCHEME EZ_PartitionScheme NEXT USED [FG6]

ALTER PARTITION FUNCTION EZ_PartitionFunction() SPLIT RANGE (6000000)
--NOTE: THE LOG WILL GROW SIGNIFICANTLY (look in your data directory; the log will grow to around 1.2 GB)
--The point is that you need to be careful when using SPLIT and MERGE: doing it to an empty range is OK, but if the range has data, it's dangerous

/*** SHRINK LOG ***/
USE [PartitionDB]
GO
DBCC SHRINKFILE (N'db_log' , 1)
GO

/*** REMOVE FILEGROUP FROM THE PARTITION ***/
ALTER PARTITION FUNCTION EZ_PartitionFunction() MERGE RANGE (6000000)

/*** SHRINK LOG ***/
USE [PartitionDB]
GO
DBCC SHRINKFILE (N'db_log' , 1)
GO

/*** SHRINK LOG ***/
USE [PartitionDB]
GO
DBCC SHRINKFILE (N'FG6_data' , 25)
GO

/*** CREATE STAGE TABLE FOR SWITCH ***/
CREATE TABLE EZ_PartionedTable_Stage
( ID int NOT NULL IDENTITY (1000000, 1),
Name VARCHAR(50)
)
ON EZ_PartitionScheme(ID)

ALTER TABLE dbo.EZ_PartionedTable_Stage ADD CONSTRAINT
PK_EZ_PartionedTable_Stage PRIMARY KEY CLUSTERED
(
ID
) WITH( STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON EZ_PartitionScheme(ID)
GO

--SHOW View Table Partition Data and explain what SWITCH DOES
ALTER TABLE EZ_PartionedTable SWITCH PARTITION 6 TO EZ_PartionedTable_Stage PARTITION 6

ALTER TABLE EZ_PartionedTable_Stage SWITCH PARTITION 6 TO EZ_PartionedTable PARTITION 6

--SHOW Dropping of data and removing old file group.
ALTER TABLE EZ_PartionedTable SWITCH PARTITION 6 TO EZ_PartionedTable_Stage PARTITION 6
DROP TABLE EZ_PartionedTable_Stage

ALTER DATABASE [PartitionDB] REMOVE FILE [FG6_data]
GO
ALTER DATABASE [PartitionDB] REMOVE FILEGROUP [FG6]
GO

/*** SHRINK LOG ***/
USE [PartitionDB]
GO
DBCC SHRINKFILE (N'FG5_data' , 25)
GO

USE master
GO
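A quick way to check whether a SPLIT or MERGE will touch real data (per the warning above about non-empty ranges) is to count rows per partition with the $PARTITION function. This is a sketch against the table and function defined in the script above:

```sql
-- Rows per partition of EZ_PartionedTable; a SPLIT or MERGE that lands in
-- a partition with a non-zero count will physically move data and grow the log.
SELECT $PARTITION.EZ_PartitionFunction(ID) AS PartitionNumber
      ,COUNT(*) AS [Rows]
FROM dbo.EZ_PartionedTable
GROUP BY $PARTITION.EZ_PartitionFunction(ID)
ORDER BY PartitionNumber;
```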


Script 3 is a real-life example of how my company uses partitioning. It uses a date-based range and shows the process we use to move and drop the oldest data:


/**************************************
** Database Partitioning Script **
** Eric Zierdt, W3i **
** http://ericemployed.blogspot.com **
** http://blog.w3i.com **
**************************************/

USE master
GO
ALTER DATABASE [PartitionDB] SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO
USE [master]
GO
DROP DATABASE PartitionDB
GO

/*** CREATE DB WITH FILEGROUP NAMES THAT ARE DATE BASED - AUGUST, SEPTEMBER AND OCTOBER 2010 ***/
CREATE DATABASE PartitionDB ON
PRIMARY(NAME = db_data, FILENAME = 'c:\PartitionDB\db.mdf' ,SIZE = 4MB)
, FILEGROUP TRANDTA_201008 ( NAME = TRANDTA_201008, FILENAME = 'c:\PartitionDB\TRANDTA_201008.ndf' ,SIZE = 25MB)
, FILEGROUP TRANDTA_201009( NAME = TRANDTA_201009, FILENAME = 'c:\PartitionDB\TRANDTA_201009.ndf', SIZE = 25MB)
, FILEGROUP TRANDTA_201010( NAME = TRANDTA_201010, FILENAME = 'c:\PartitionDB\TRANDTA_201010.ndf', SIZE = 25MB)
, FILEGROUP TRANDEFAULT( NAME = TRANDEFAULT_data, FILENAME = 'c:\PartitionDB\FGDefault.ndf', SIZE = 25MB)
LOG ON( NAME = db_log, FILENAME = 'c:\PartitionDB\log.ldf', SIZE = 1MB,FILEGROWTH = 100MB ) ;
GO
USE PartitionDB
GO

/*** MAKE A PARTITION FUNCTION WITH RANGES FOR 08,09,10 2010 ***/
CREATE PARTITION FUNCTION [EZ_PartitionFunction](DATETIME) AS RANGE RIGHT FOR VALUES (
N'2010-08-01'
, N'2010-09-01'
, N'2010-10-01')
GO

/*** BIND THE SCHEME TO THE FUNCTION ***/
CREATE PARTITION SCHEME EZ_PartitionScheme AS
PARTITION EZ_PartitionFunction TO(
[TRANDEFAULT]
, [TRANDTA_201008]
, [TRANDTA_201009]
, [TRANDTA_201010])
GO

/*** CREATE A TABLE AND BIND IT TO THE PARTITION ***/
CREATE TABLE EZ_PartionedTable
( ID int,
DateAdded DATETIME PRIMARY KEY,
Name VARCHAR(50)
)
ON EZ_PartitionScheme(DateAdded)

/*** INSERT SOME DATA INTO THE NEW TABLE, 3 RECORDS PER MONTH ***/
INSERT INTO EZ_PartionedTable (ID,DateAdded, NAME)
VALUES (1,'08/01/2010', 'Test')
,(2,'08/10/2010','Test')
,(3,'08/31/2010','Test')
,(4,'09/01/2010','Test')
,(5,'09/15/2010','Test')
,(6,'09/30/2010 23:59:59','Test')
,(7,'10/01/2010','Test')
,(8,'10/07/2010','Test')
,(9,'10/22/2010','Test')



/*** On around the 25th of each month, we fire off an agent job that calls a
proc that makes a new month filegroup, file, function and scheme ***/
USE [master]
GO
ALTER DATABASE [PartitionDB] ADD FILEGROUP [TRANDTA_201011]
GO
ALTER DATABASE [PartitionDB] ADD FILE ( NAME = N'TRANDTA_201011', FILENAME = N'C:\PartitionDB\TRANDTA_201011.ndf' , SIZE = 25MB , FILEGROWTH = 5MB )
TO FILEGROUP [TRANDTA_201011]
GO
USE PartitionDB
GO

ALTER PARTITION SCHEME EZ_PartitionScheme NEXT USED [TRANDTA_201011]

ALTER PARTITION FUNCTION EZ_PartitionFunction() SPLIT RANGE ('11/01/2010')


/*** INSERT A RECORD FOR OCTOBER AND CHECK THE VIEW PARTITION SCRIPT ***/
INSERT INTO EZ_PartionedTable (ID,DateAdded, NAME)
VALUES (1,'10/31/2010', 'Test')

/*** INSERT A RECORD FOR NOVEMBER AND CHECK THE VIEW PARTITION SCRIPT ***/
INSERT INTO EZ_PartionedTable (ID,DateAdded, NAME)
VALUES (2,'11/01/2010', 'Test')


/*** Within the first few days of the month we verify that the partitions are correct, then we run a proc that does this: ***/
-- MAKE A STAGING TABLE THAT IS EXACTLY THE SAME AS THE MAIN TABLE
CREATE TABLE EZ_PartionedTable_Stage
( ID int,
DateAdded DATETIME PRIMARY KEY,
Name VARCHAR(50)
)
ON EZ_PartitionScheme(DateAdded)

/*** SWITCH OUT THE OLDEST MONTH'S PARTITION DATA TO THE STAGING TABLE ***/
ALTER TABLE EZ_PartionedTable SWITCH PARTITION 2 TO EZ_PartionedTable_Stage PARTITION 2

/**** BCP OUT DATA (NOT SHOWN HERE) ****/

/*** DROP STAGING DATA (AND ALL THE DATA FROM THE OLDEST MONTH WITH IT) ***/
DROP TABLE EZ_PartionedTable_Stage

/*** DROP THE FUNCTION RANGE FOR THE OLDEST MONTH ***/
ALTER PARTITION FUNCTION EZ_PartitionFunction() MERGE RANGE ('8/01/2010')
GO
/*** DROP THE FILEGROUP AND FILE FOR THE OLDEST MONTH ***/
ALTER DATABASE [PartitionDB] REMOVE FILE [TRANDTA_201008]
GO
ALTER DATABASE [PartitionDB] REMOVE FILEGROUP [TRANDTA_201008]
GO


--DMVs that are useful:

SELECT *
FROM sys.partition_functions

SELECT *
FROM sys.partition_schemes
-- Joins to Function on function_id

SELECT *
FROM sys.partition_range_values
-- Joins to Function on function_id

SELECT *
FROM sys.indexes
-- Joins to schemes on data_space_id

SELECT *
FROM sys.data_spaces
-- Joins to indexes on data_space_id

SELECT *
FROM sys.filegroups
-- Joins to indexes on data_space_id


The final script shows how many records are in each filegroup/range; I find it very useful in many different situations.


/**************************************
** Database Partitioning Script **
** Eric Zierdt, W3i **
** http://ericemployed.blogspot.com **
** http://blog.w3i.com **
**************************************/

USE PartitionDB
DECLARE @TableName VARCHAR(150) = '%%'
,@FileGroup VARCHAR(150) = '%%'

SELECT OBJECT_NAME(p.object_id) AS TableName,
i.name AS IndexName,
p.partition_number AS PartitionNumber,
fg.name AS FileGroupName,
prv_left.value AS LowerBoundaryValue,
prv_right.value AS UpperBoundaryValue,
p.rows AS Rows
FROM sys.partitions AS p
JOIN sys.indexes AS i ON i.object_id = p.object_id AND i.index_id = p.index_id
JOIN sys.data_spaces AS ds ON ds.data_space_id = i.data_space_id
JOIN sys.partition_schemes AS ps ON ps.data_space_id = ds.data_space_id
JOIN sys.partition_functions AS pf ON pf.function_id = ps.function_id
JOIN sys.destination_data_spaces AS dds2 ON dds2.partition_scheme_id = ps.data_space_id AND dds2.destination_id = p.partition_number
JOIN sys.filegroups AS fg ON fg.data_space_id = dds2.data_space_id
LEFT JOIN sys.partition_range_values AS prv_left ON ps.function_id = prv_left.function_id AND prv_left.boundary_id = p.partition_number - 1
LEFT JOIN sys.partition_range_values AS prv_right ON ps.function_id = prv_right.function_id AND prv_right.boundary_id = p.partition_number
WHERE OBJECTPROPERTY(p.object_id, 'IsMSShipped') = 0
AND OBJECT_NAME(p.object_id) LIKE @TableName
AND fg.name LIKE @FileGroup
ORDER BY TableName, IndexName, PartitionNumber;

USE master

Thursday, July 8, 2010

Log Shipping error

I had a problem where I noticed on my primary log shipping machine (the one the data originates on) that the LS_Alerts job was failing. It said that a new database we were working on was failing. I also noticed two jobs that didn't belong on that machine: an LS_Copy and an LS_Restore, which should have been on the secondary machine. I believe another admin set it up from the development machine during the initial phase for some reason, but I couldn't find a source machine. I deleted the two jobs from the job manager, but the alert job still showed failures when run. I decided to figure out what the problem was, and this is what I did to look into it:

I saw that the alert job ran the proc sys.sp_check_log_shipping_monitor_alert, so I opened up the proc to see what it was doing. It calls the following:


(select primary_server
,primary_database
,isnull(threshold_alert, 14420)
,backup_threshold
,datediff(minute, last_backup_date_utc, @curdate_utc)
,cast(0 as int)
from msdb.dbo.log_shipping_monitor_primary
where threshold_alert_enabled = 1
and datediff(minute, last_backup_date_utc, @curdate_utc) > backup_threshold)
union
(select secondary_server
,secondary_database
,isnull(threshold_alert, 14421)
,restore_threshold
,datediff(minute, last_restored_date_utc, @curdate_utc)
,isnull(last_restored_latency,0)
from msdb.dbo.log_shipping_monitor_secondary
where threshold_alert_enabled = 1
and (datediff(minute, last_restored_date_utc, @curdate_utc) > restore_threshold
or last_restored_latency > restore_threshold))



and raises an error if there are records.

When I ran it on my primary machine, the second statement (from log_shipping_monitor_secondary) returned a record. The odd thing was that when I did a SELECT * against the table, I saw that the primary server in that row was the same server as the secondary server, so it was set to log ship to itself. From that point it was simple: delete the record, and the error went away.
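For anyone hitting the same thing, the cleanup was just a delete against the monitor table in msdb. A sketch (the server and database names here are placeholders; run the SELECT first and use the values you actually see):

```sql
-- Look at the monitor table first and confirm the bogus row
SELECT * FROM msdb.dbo.log_shipping_monitor_secondary

-- Remove the entry where the server is set to log ship to itself
DELETE FROM msdb.dbo.log_shipping_monitor_secondary
WHERE secondary_server = 'MYSERVER'        -- placeholder
  AND secondary_database = 'MyDatabase'    -- placeholder
```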

Hope that helps someone in the future.

Thursday, July 1, 2010

Scary Script

This has to be one of the scariest scripts I have ever seen...my co-worker sent it to me. It is used to delete all data from all tables in a given database.


-- change me!
USE [MyDatabase]

-- don't touch below here

-- disable all constraints
EXEC sp_MSForEachTable 'ALTER TABLE ? NOCHECK CONSTRAINT all'

-- delete data in all tables
EXEC sp_MSForEachTable 'DELETE FROM ?'

-- enable all constraints
EXEC sp_MSForEachTable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT all'

-- reseed identity columns
EXEC sp_MSForEachTable 'DBCC CHECKIDENT ( ''?'', RESEED, 0)'

-- check to see if any tables have rows
-- this could probably be handled better
CREATE TABLE #CheckRows (TableName VARCHAR(60), NumRows INT)
EXEC sp_MSforeachtable 'INSERT INTO #CheckRows SELECT ''?'',COUNT(1) FROM ?'
SELECT * FROM #CheckRows WHERE NumRows > 0
DROP TABLE #CheckRows

Thursday, May 27, 2010

Storing Login Errors

In an effort to post more often, I'm posting something very simple. I record (mainly for auditing reasons) all login failures. Nightly (at 11:59) I store the day's errors in a table. This is how I do that:


-- =================================================================
-- Author: Eric Zierdt
-- Create date: 09/24/2009
-- Description: Records daily Login Errors
-- exec DBA_INSERT_LOGIN_AUDIT
-- =================================================================

CREATE PROCEDURE [dbo].[DBA_INSERT_LOGIN_AUDIT]

AS
BEGIN

CREATE TABLE #temp_login_trace
(LogDate DATETIME,
ProcessInfo VARCHAR(100),
[Text] VARCHAR(MAX))

INSERT #temp_login_trace
EXEC xp_readerrorlog 0,1,'Login'

INSERT INTO dbo.DBA_LOGIN_AUDIT
SELECT LogDate
,@@Servername
,ProcessInfo
,[Text]
FROM #temp_login_trace
WHERE LogDate > CAST(GETDATE() AS Date)

DROP TABLE #temp_login_trace

END
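The destination table dbo.DBA_LOGIN_AUDIT isn't scripted above; a minimal sketch, with column names and types inferred from the INSERT in the proc, might look like this:

```sql
CREATE TABLE dbo.DBA_LOGIN_AUDIT
( LogDate DATETIME,
  ServerName VARCHAR(128),    -- populated from @@SERVERNAME
  ProcessInfo VARCHAR(100),
  [Text] VARCHAR(MAX)
)
```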

Wednesday, May 5, 2010

Feature Add...Please

How many times has it happened to you that you are running some long running query and you need to close your connection (shut down your computer)? I wish there was a way to keep the request alive even if the connection is broken...

If anyone out there knows of a way to do this, let me know!
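The closest workaround I know of is to run the statement as a SQL Agent job instead of from your own session: the job runs on the server, so it survives a client disconnect. A sketch (the job, step, proc, and database names are made up for the example):

```sql
-- Create a one-off Agent job, point it at the long-running work, and start it.
-- It keeps running even if your SSMS session closes.
EXEC msdb.dbo.sp_add_job @job_name = N'LongRunningQuery'
EXEC msdb.dbo.sp_add_jobstep @job_name = N'LongRunningQuery'
    ,@step_name = N'Run the work'
    ,@subsystem = N'TSQL'
    ,@command = N'EXEC dbo.MyLongRunningProc'   -- hypothetical proc
    ,@database_name = N'MyDatabase'             -- hypothetical database
EXEC msdb.dbo.sp_add_jobserver @job_name = N'LongRunningQuery'
EXEC msdb.dbo.sp_start_job @job_name = N'LongRunningQuery'
```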

Friday, April 16, 2010

Kill All SPIDs for given Databases

So, as promised in my last post, here's a script that will loop over a list of databases and kill all SPIDs connected to them. If you want to use this, you'll need to install the sp_who3 proc and the StringToTable function that I posted in the previous two posts.


DECLARE @DBsToKill VARCHAR(MAX) = 'DB1,DB2,DB3' --List of DBs here, comma delimited
,@KillTest VARCHAR(20) = 'TEST' --TEST=print SPIDs; KILL=Kill SPIDs
,@SQL VARCHAR(MAX)

SET @SQL = '
SET NOCOUNT ON
CREATE TABLE #SpidsToKill (
SPID VARCHAR(5)
,[Status] VARCHAR(max)
,[login] VARCHAR(max)
,[HostName] VARCHAR(max)
,BlkBy VARCHAR(max)
,DBName VARCHAR(100)
,Command VARCHAR(max)
,CPUTime VARCHAR(max)
,DiskIO VARCHAR(max)
,LastBatch VARCHAR(MAX)
,ProgramName VARCHAR(MAX)
,SPID2 VARCHAR(max)
,RequestID VARCHAR(max)
)
'
DECLARE @DBName VARCHAR(100)

DECLARE Z_CURSOR CURSOR FOR
SELECT TableValue
FROM dbo.StringToTable(@DBsToKill,',')

OPEN Z_CURSOR
FETCH NEXT FROM Z_CURSOR
INTO @DBName
WHILE @@FETCH_STATUS = 0
BEGIN

SET @SQL += '
INSERT INTO #SpidsToKill

( SPID
,Status
,login
,HostName
,BlkBy
,DBName
,Command
,CPUTime
,DiskIO
,LastBatch
,ProgramName
,SPID2
,RequestID
)
EXEC sp_who3 ''' + @DBName + '''
'

FETCH NEXT FROM Z_CURSOR
INTO @DBName
END
CLOSE Z_CURSOR
DEALLOCATE Z_CURSOR

IF (@KillTest = 'Kill')
BEGIN
SET @SQL += '
DECLARE @SPID VARCHAR(4)
,@InternalSQL VARCHAR(MAX) = ''''

DECLARE Z_CURSOR CURSOR FOR
SELECT SPID
FROM #SpidsToKill

OPEN Z_CURSOR
FETCH NEXT FROM Z_CURSOR
INTO @SPID
WHILE @@FETCH_STATUS = 0
BEGIN
SET @InternalSQL += ''
KILL '' + @SPID

FETCH NEXT FROM Z_CURSOR
INTO @SPID
END
CLOSE Z_CURSOR
DEALLOCATE Z_CURSOR

exec(@InternalSQL)
--PRINT @InternalSQL
'
END
ELSE
BEGIN
SET @SQL += 'SELECT * FROM #SpidsToKill ORDER BY DBName, SPID'

END
SET @SQL += '
DROP TABLE #SpidsToKill'

--PRINT @SQL

EXEC(@SQL)




Returning SPIDS for a given Database

I took the sp_who2 proc and altered it so that it takes a parameter containing a database name and returns only SPIDs that are active in that database. I pretty much just use this when I'm going to be restoring backups and need the database to be free of connections. In my next post, we'll use this proc along with the StringToTable function I discussed in yesterday's post to create a script that kills all SPIDs in a list of databases.
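Usage looks like this (assuming the proc below has been created in master):

```sql
-- Return only the SPIDs active in one database
EXEC sp_who3 'MyDatabase'
```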


USE [master]
GO
/****** Object: StoredProcedure [dbo].[sp_who3] Script Date: 04/16/2010 15:23:09 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO



CREATE procedure [dbo].[sp_who3] --- 1995/11/03 10:16
@dbname sysname = NULL
as

set nocount on

declare
@retcode int

declare
@sidlow varbinary(85)
,@sidhigh varbinary(85)
,@sid1 varbinary(85)
,@dbid INT
,@spidlow int
,@spidhigh int

declare
@charMaxLenLoginName varchar(6)
,@charMaxLenDBName varchar(6)
,@charMaxLenCPUTime varchar(10)
,@charMaxLenDiskIO varchar(10)
,@charMaxLenHostName varchar(10)
,@charMaxLenProgramName varchar(10)
,@charMaxLenLastBatch varchar(10)
,@charMaxLenCommand varchar(10)

declare
@charsidlow varchar(85)
,@charsidhigh varchar(85)
,@charspidlow varchar(11)
,@charspidhigh varchar(11)

-- defaults

select @retcode = 0 -- 0=good ,1=bad.
select @sidlow = convert(varbinary(85), (replicate(char(0), 85)))
select @sidhigh = convert(varbinary(85), (replicate(char(1), 85)))

select
@spidlow = 0
,@spidhigh = 32767

--------------------------------------------------------------
IF (@dbname IS NULL) --Simple default to all databases.
GOTO LABEL_17PARM1EDITED

select @dbid = null
if exists(select * from sys.databases WHERE name = @dbname)
select @dbid = database_id from sys.databases WHERE name = @dbname

IF (@dbid IS NOT NULL) --Parm is a recognized database name.
begin
GOTO LABEL_17PARM1EDITED
end

--------

raiserror(15007,-1,-1,@dbname)
select @retcode = 1
GOTO LABEL_86RETURN


LABEL_17PARM1EDITED:


-------------------- Capture consistent sysprocesses. -------------------

select

spid
,status
,sid
,hostname
,program_name
,cmd
,cpu
,physical_io
,blocked
,dbid
,convert(sysname, rtrim(loginame))
as loginname
,spid as 'spid_sort'

, substring( convert(varchar,last_batch,111) ,6 ,5 ) + ' '
+ substring( convert(varchar,last_batch,113) ,13 ,8 )
as 'last_batch_char'
,request_id

into #tb1_sysprocesses
from sys.sysprocesses with (nolock)

if @@error <> 0
begin
select @retcode = @@error
GOTO LABEL_86RETURN
end


--------Prepare to dynamically optimize column widths.


select
@charsidlow = convert(varchar(85),@sidlow)
,@charsidhigh = convert(varchar(85),@sidhigh)
,@charspidlow = convert(varchar,@spidlow)
,@charspidhigh = convert(varchar,@spidhigh)



select
@charMaxLenLoginName =
convert( varchar
,isnull( max( datalength(loginname)) ,5)
)

,@charMaxLenDBName =
convert( varchar
,isnull( max( datalength( rtrim(convert(varchar(128),db_name(dbid))))) ,6)
)

,@charMaxLenCPUTime =
convert( varchar
,isnull( max( datalength( rtrim(convert(varchar(128),cpu)))) ,7)
)

,@charMaxLenDiskIO =
convert( varchar
,isnull( max( datalength( rtrim(convert(varchar(128),physical_io)))) ,6)
)

,@charMaxLenCommand =
convert( varchar
,isnull( max( datalength( rtrim(convert(varchar(128),cmd)))) ,7)
)

,@charMaxLenHostName =
convert( varchar
,isnull( max( datalength( rtrim(convert(varchar(128),hostname)))) ,8)
)

,@charMaxLenProgramName =
convert( varchar
,isnull( max( datalength( rtrim(convert(varchar(128),program_name)))) ,11)
)

,@charMaxLenLastBatch =
convert( varchar
,isnull( max( datalength( rtrim(convert(varchar(128),last_batch_char)))) ,9)
)
from
#tb1_sysprocesses
where
spid >= @spidlow
and spid <= @spidhigh



--------Output the report.


EXEC(
'
SET nocount off

SELECT
SPID = convert(char(5),spid)

,Status =
CASE lower(status)
When ''sleeping'' Then lower(status)
Else upper(status)
END

,Login = substring(loginname,1,' + @charMaxLenLoginName + ')

,HostName =
CASE hostname
When Null Then '' .''
When '' '' Then '' .''
Else substring(hostname,1,' + @charMaxLenHostName + ')
END

,BlkBy =
CASE isnull(convert(char(5),blocked),''0'')
When ''0'' Then '' .''
Else isnull(convert(char(5),blocked),''0'')
END

,DBName = substring(case when dbid = 0 then null when dbid <> 0 then db_name(dbid) end,1,' + @charMaxLenDBName + ')
,Command = substring(cmd,1,' + @charMaxLenCommand + ')

,CPUTime = substring(convert(varchar,cpu),1,' + @charMaxLenCPUTime + ')
,DiskIO = substring(convert(varchar,physical_io),1,' + @charMaxLenDiskIO + ')

,LastBatch = substring(last_batch_char,1,' + @charMaxLenLastBatch + ')

,ProgramName = substring(program_name,1,' + @charMaxLenProgramName + ')
,SPID = convert(char(5),spid) --Handy extra for right-scrolling users.
,REQUESTID = convert(char(5),request_id)
from
#tb1_sysprocesses --Usually DB qualification is needed in exec().
where
dbid = ' + isnull(convert(varchar, @dbid), 'dbid') + '

-- (Seems always auto sorted.) order by spid_sort

SET nocount on
'
)


LABEL_86RETURN:


if (object_id('tempdb..#tb1_sysprocesses') is not null)
drop table #tb1_sysprocesses

return @retcode -- sp_who3




Thursday, April 15, 2010

Converting a delimited string into a table

This isn't anything new, but we will be building on it in the next post. At times you want to take a delimited string and have it returned in table format so you can loop over it or otherwise work with it; many people have done this before, and I'm sure I'm drawing on things I've seen. This function works well; you call it like this:

SELECT *
FROM dbo.StringToTable('a,b,c,d',',')


Where the first parameter is the string to be converted and the second is the delimiter.


USE [master]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

--/***********************************************************************
--***
--*** Function: [StringToTable]
--*** Purpose: Takes a delimited string and returns a table
--***
--***
--*** Author: Eric Zierdt
--*** Date Created: 2010-04-15
--***
--*** Revision History
--*** Date Author Description
--*** 2010-04-15 ez Created
--*** select * from dbo.StringToTable('a,b,c,d',',')
--***********************************************************************/

CREATE FUNCTION [dbo].[StringToTable]
(
@String VARCHAR(MAX),
@Delimiter CHAR(1) = ','
)
RETURNS @ReturnList TABLE
(
ID INT,
TableValue VARCHAR(MAX)
)
AS BEGIN
DECLARE @ArrayList TABLE
(
ID INT, TableValue VARCHAR(MAX)
)
DECLARE @Value VARCHAR(MAX)
DECLARE @NextDelimiterLocation INT

------------------------------
-- ENSURE TRAILING DELIMITER --
------------------------------
SET @String = LTRIM(RTRIM(@String)) +
CASE WHEN RIGHT(@String, 1) = @Delimiter THEN ''
ELSE @Delimiter
END

-------------------------------------------
-- SET THE LOCATION OF THE FIRST DELIMITER --
-------------------------------------------
SET @NextDelimiterLocation = ISNULL(CHARINDEX(@Delimiter, @String, 1), 0)

----------------------------------------
-- IF NO DELIMITER THEN SET THE WHOLE --
-- STRING TO THE FIRST ROW OF THE TABLE --
----------------------------------------
IF @NextDelimiterLocation = 0
BEGIN
INSERT INTO @ArrayList
(
ID
,TableValue
)
SELECT 1, @String
END
ELSE
BEGIN
DECLARE @Counter INT
SET @Counter = 0
WHILE @NextDelimiterLocation > 0
BEGIN
-----------------------------------------------------
-- SET THE VALUE PARAM TO DATA BEFORE THE DELIMITER --
-- THEN LOOP TO DO THE SAME FOR THE REST OF THE LIST --
-----------------------------------------------------
SET @Value = LTRIM(RTRIM(LEFT(@String, @NextDelimiterLocation - 1)))
IF LEN(@String) > 0 AND @NextDelimiterLocation <= LEN(@String)
BEGIN
SET @Counter = @Counter + 1
INSERT INTO @ArrayList ( ID, TableValue )
SELECT @Counter, @Value
END
SET @String = SUBSTRING(@String, @NextDelimiterLocation + LEN(@Delimiter), LEN(@String))
SET @NextDelimiterLocation = CHARINDEX(@Delimiter, @String, 1)
END
END
INSERT @ReturnList
(
ID, TableValue
)
SELECT ID, TableValue
FROM @ArrayList

RETURN
END



Tuesday, February 16, 2010

Delay run until restore is done.

A while back I had a problem where I noticed a job kept dying because it pulled data from a log shipped database, and it was trying to pull that data while the database was restoring its log file. So I came up with this little sp to fire right before I execute my code. If the database is restoring (state = 1), it loops, checking back every 5 seconds, until the database is no longer restoring. Since I execute this before my EXEC (@SQL), it delays the run of my code until the database is ready.



This can be edited to work for other states, or for all non-ONLINE states, by simply changing the WHILE loop condition from @IsRestoring = 1 to @IsRestoring <> 0.

I wish there was a better way of formatting this, my code doesn't look this crappy


SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER OFF
GO

/**********************************
** Run_After_Restore_Completes **
** Created by Eric Zierdt **
**********************************/

CREATE PROCEDURE [dbo].[Run_After_Restore_Completes] (@DBName VARCHAR(155))
AS

DECLARE @IsRestoring TINYINT -- holds sys.databases.state (0 = ONLINE, 1 = RESTORING)

SET NOCOUNT ON

-- RUN IF THE DATABASE EXISTS
IF EXISTS (SELECT name FROM sys.databases WHERE name = @DBName)
BEGIN
-- CHECK THE STATE OF THE DATABASE
SELECT @IsRestoring = State
FROM sys.databases
WHERE NAME = @DBName

-- LOOP UNTIL DATABASE IS NO LONGER RESTORING
WHILE @IsRestoring = 1
BEGIN
-- SET A DELAY SO IT ISN'T RUNNING CONSTANTLY
WAITFOR DELAY '00:00:05'
SELECT @IsRestoring = State FROM sys.databases WHERE NAME = @DBName
END
END
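Usage is just a call right before the real work; the database name here is an example:

```sql
-- Block until the log-shipped database finishes restoring...
EXEC dbo.Run_After_Restore_Completes @DBName = 'MyLogShippedDB'

-- ...then it is safe to run the actual work, e.g. EXEC (@SQL)
```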

Wednesday, February 10, 2010

@@IDENTITY vs. SCOPE_IDENTITY()

So, I'm not doing a great job of posting, sorry about that.





This isn't new, but I wanted a proof of concept, so I created a quick query to show how using @@IDENTITY won't work if you are inserting into a table that has a trigger on it which inserts into a second table. Instead, you want to use SCOPE_IDENTITY().






BEGIN TRAN
CREATE TABLE TempA (ID INT IDENTITY (1, 1), TxtData VARCHAR(55))
CREATE TABLE TempB (ID INT IDENTITY (1000, 1), TAID INT, TextData VARCHAR(55))
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TRIGGER [dbo].[TempA_T01]
ON [dbo].[TempA]
AFTER INSERT
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
INSERT INTO TempB
(
TAID
, TextData
)
SELECT
ID
,'System'
FROM INSERTED
END

INSERT INTO TempA
( TxtData )
VALUES ( 'Test' -- TxtData - varchar(55)
)
SELECT @@IDENTITY
SELECT SCOPE_IDENTITY()
SELECT *
FROM TempA
SELECT *
FROM TempB

ROLLBACK