Deadlock Detector: Drop it like it’s hot!

Last time I talked about the right tool for the right job. After that post, I noticed that during the nightly backups and job-runs, the SQL Server log filled up with deadlocks. Hoping to find the cause of this issue, I installed SQL Solutions Deadlock Detector. But I must warn you: it’s easier to install it than to get rid of it!

Downloading it
The drama starts with downloading the tool. If you go to the tool’s download location, there’s no mention of a trial version. They do like to tell you that it’s free… A lot of times… But guess what… It isn’t!

Installing it
After downloading the tool, I started the installation. Again, there’s no mention of any kind that I’d just downloaded a trial version, and that I’d need to pay for the tool after 7 days.

Use it
After the installation I started the tool. Only then does the tool show you, for the first time, that you’re actually running a trial:

After I clicked on “Try”, I was prompted to enter a SQL instance to run the tool on. After giving it my SQL Server instance location and name, it quickly showed a pop-up. Apparently it installed a bunch of crap on my SQL instance! Again, there is NO mention that it needs to install objects in your msdb! Also, SQL Server Agent needs to be running on your instance, because Deadlock Detector can’t run without it.

So what does Deadlock Detector install? I’ve created a screenshot of all the objects that are installed:

Uninstalling it
Okay, the tool wasn’t what I was looking for, so I wanted to uninstall it. But how do I get rid of all those objects that were created in my msdb database?

First I uninstalled the tool, hoping that it would also drop all the objects it created. But at the end of the uninstall, it sent me to this webpage. So I’m supposed to download a special uninstaller to get rid of their installer?

Okay, so I downloaded the special installer-uninstaller, and ran it. Close, but no cigar… The tool ran and returned no errors, but the objects still existed. Running it again ended with the same result.

To check it yourself, run this query:

SELECT *
FROM msdb.LakeSideLockLogger._LakeSide_DbTools_LockLog

If it’s still running, you’ll see records appear that tell you there were deadlocks on your system. Even though you closed the tool and, like me, expect it to shut down, the services still run.

To check if the objects still exist on your server, run this query:

SELECT *
FROM msdb.sys.objects
WHERE 1 = 1
AND is_ms_shipped = 0
AND Name LIKE '_LakeSide%'

So what do you do then? Right: I sent the company a support call. I’ve waited more than a week by now, but still no reply. Not even a confirmation that they’ve received my mail. Really, really bad service from SQL Solutions, if you ask me…

So after a lot of searching, I saw a really small reference to another uninstall method: from the tool itself. So I re-installed the tool, and tried that:

And that worked for me!

Apparently more people had issues with uninstalling the tool, looking at results like this.

The thing I learned from this is to dig a little deeper before I pick up a new tool. The website of the tool’s manufacturer might not be the best source to find out how a tool functions…

In the meantime, I found some more objects that aren’t uninstalled by Deadlock Detector: Operators and Alerts. Even though you’ve “uninstalled” the tool, the objects, queues, etc., there are still some leftovers: 1 operator and 3 alerts.

You can drop them by running the script below:

--Drop Operator
EXEC msdb.dbo.sp_delete_operator @name=N'Deadlock Detector – Default operator'

--Drop Alerts
EXEC msdb.dbo.sp_delete_alert @name=N'Deadlock Detector – Process is killed'

EXEC msdb.dbo.sp_delete_alert @name=N'Deadlock Detector – Lock is detected'

EXEC msdb.dbo.sp_delete_alert @name=N'Deadlock Detector – Deadlock is detected'

Are nested Inserts possible?

Two weeks ago I got an interesting question. Two of my colleagues thought of a funny way to tease me, by asking me this:

If we have an insert into table X, can we nest another insert into the query to fill table Y with the identity generated by insert X?

After thinking about it for a few minutes, I told them I thought this was possible. Eventually I found a few spare minutes, and came up with a solution. In all honesty, it isn’t completely mine: a few forums I came across mentioned this option, and I took those ideas and turned them into my own version.

So first, let’s create the 2 tables we need for this example:

CREATE TABLE T1
	(ID INT IDENTITY(1,1) NOT NULL,
	 String VARCHAR(100))

CREATE TABLE T2
	(ID INT NOT NULL,
	 String VARCHAR(100),
	 Action VARCHAR(100))

So I’ve created T1 (table X in the question), and T2 (table Y). Let’s assume that T1 contains your product data, and for every product you insert, you want to insert a log-record into T2.

This can’t be done with actual nested inserts. If you want to do this, the easiest way is to create a Stored Procedure (SP) that uses SCOPE_IDENTITY to retrieve the generated ID, and inserts that into your second table. But because an SP isn’t always the best way to do this, I wanted to see if I could transform it into a single query.
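To illustrate that first option, a minimal sketch of such an SP could look like this (the procedure name and log text are made up for this example):

CREATE PROCEDURE InsertProduct
	@String VARCHAR(100)
AS
BEGIN
	--Insert the product record, which generates the identity
	INSERT INTO T1 (String) VALUES (@String)

	--Use the generated identity for the log record
	INSERT INTO T2 (ID, String, Action)
	VALUES (SCOPE_IDENTITY(), @String, 'Inserted via SP')
END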

Since SQL Server 2008 we have the opportunity to use the MERGE statement. What the MERGE does is synchronize two tables by inserting, updating and deleting rows in the destination table, based on a set of rules and/or joins.

So, what would this statement look like? After some trial and error, I created this statement:

MERGE INTO T1
USING (SELECT -1 AS ID, 'This is a string' AS String) Q
	ON Q.ID = T1.ID
WHEN NOT MATCHED BY TARGET
	THEN INSERT (String) VALUES ('This is a string...')
OUTPUT
	inserted.ID, inserted.String, CONCAT('<<<<< Inserted in T1 @ ', CONVERT(DATE, GETDATE()) ,' >>>>>')
	INTO T2 (ID, String, Action);

As you can see the destination table is T1. This is the first table the record is inserted into, and the table that generates the identity. Because we only want to insert the record, and not update or delete anything, I only created a “WHEN NOT MATCHED BY TARGET” clause.

Because the ID columns don’t match, the record gets inserted into T1. After that, I use the OUTPUT clause of the merge statement to insert the same record (but with the generated identity) into T2. As a reference, I also insert the action-description that contains a date.
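To see the result for yourself, you can query both tables after running the MERGE; T2 should contain the same record as T1, including the generated identity:

SELECT * FROM T1
SELECT * FROM T2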

So as you can see, you can use nested inserts in SQL Server, only via another statement. But remember, this is just written to prove it’s possible in a single statement, and not for production usage. So if you decide to use this in production or on your system, consider your other options first!

Code samples:
Are nested Inserts possible.sql

T-SQL Toolbelt – Search for objects in databases – V 2.1.0

A few weeks ago, I received a message from an old colleague and friend Eric (Blog | @saidin). He wanted to know if I had a query in my (and I quote) “magic bag of SQL tricks” to search through objects in SQL Server. The company he works for (he is a software developer and independent consultant) wanted to change all stored procedures that contained functionality to calculate VAT (Value Added Tax).

I remembered that a few years ago, I needed that same functionality, and I wrote a few scripts to search for specific dependencies in views and stored procedures. In addition to a query that gets information from sys.tables and sys.columns, I used these queries to get view and stored procedure content:

SELECT *
FROM sys.syscomments
WHERE text LIKE '%<SearchTerm>%'

SELECT *
FROM INFORMATION_SCHEMA.VIEWS
WHERE VIEW_DEFINITION LIKE '%<SearchTerm>%'

The first query uses information from sys.syscomments, which, according to MSDN:

“Contains entries for each view, rule, default, trigger, CHECK constraint, DEFAULT constraint, and stored procedure within the database. The text column contains the original SQL definition statements.”

The second query uses INFORMATION_SCHEMA, which contains SQL Server metadata (see the MSDN article):

An information schema view is one of several methods SQL Server provides for obtaining metadata.

The VIEWS view (a view on all views?) returns a row for each view that can be accessed by the current user, in the current database. So this means that the view only returns rows for objects that you have permissions on.

Then I decided to write a script that does this in one query, and more… When working on this script, I thought about adding more functionality to it. Why wouldn’t you want to search for primary or foreign key columns, triggers, functions, etc? But adding more information to the resultset often means that the overview is lost along the way. Because of that I created a switch system. By setting a few bits you can turn on what you need to see, and turn off what you don’t want to see. This way the result is kept clean, and you’re not bothered with unnecessary information.
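The switch system itself boils down to a set of bit variables that enable or disable parts of the resultset. A stripped-down sketch of the idea (the variable names are invented for illustration; the real script covers far more object types):

--Turn each part of the result on (1) or off (0)
DECLARE @ShowTables   BIT = 1
DECLARE @ShowTriggers BIT = 0

IF @ShowTables = 1
BEGIN
	SELECT name FROM sys.tables
END

IF @ShowTriggers = 1
BEGIN
	SELECT name FROM sys.triggers
END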

One of the issues I ran into was how to search for a specific string. Because I wanted to let the user enter the search term once, I needed to use a variable. But if you use a variable and add a wildcard (%) as the first and last character, the query returns all objects, instead of only the objects that match the search term.

So because of this, I used dynamic SQL to search through the list of objects. In dynamic SQL it’s possible to work with wildcards in a LIKE statement. The only thing I needed to do was change one more table from a table variable to a physical temp table, because it’s used in the dynamic SQL. Apparently dynamic SQL can’t use a table variable (DECLARE @Object TABLE) as a data source.
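A bare-bones sketch of that dynamic SQL approach (the #Objects temp table and its columns are hypothetical stand-ins for the ones the script actually uses):

DECLARE @SearchTerm NVARCHAR(100) = N'VAT'
DECLARE @SQL NVARCHAR(MAX)

--The wildcards live inside the dynamic string, so LIKE works as expected
SET @SQL = N'SELECT ObjectName FROM #Objects WHERE ObjectContent LIKE ''%' + @SearchTerm + N'%'''
EXEC sp_executesql @SQL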

So this is what I could build in the past few weeks. The only problem is that fixing issues that I found resulted in adding more and more new functionality to the script. With that in mind, I want to create a toolbelt with useful SQL scripts for myself. But of course, I want to share it with the community, so they can use it if they like.

So in the upcoming weeks, I hope to build as much functionality into this script as I can. There are still a few wishes for the future, and a few features that I want to build in, just because they’re cool! For every new version, I will write a blog post with release notes, so you’re aware of the changes in the script.

For further questions and wishes, please contact me via Twitter or this blog. I’d love to hear your ideas and wishes, so that I can implement them in the script!

You can download the script by clicking on the image below.


Version 2.1.0:

Extended properties of database objects

When you work with SQL Server, you regularly want to know when an object was last executed, last modified, etc. Well, at least I do! In most cases you need to use sys.objects to obtain this information, join it with other system views, search for your information, and so on. But there is an easier way.

The script below will get you the latest information of any object in your database.

USE SandBox

SELECT
    DB_NAME(Txt.dbid)                          AS DatabaseName,
    OBJECT_NAME(Txt.objectid, Txt.dbid)        AS ProcedureName,
    Objects.create_date                        AS CreationDate,
    Objects.modify_date                        AS ModifiedDate,
    MAX(Stats.last_execution_time)             AS Last_Execution,
    CASE Objects.type
        WHEN 'AF' THEN 'Aggregate function (CLR)'
        WHEN 'C' THEN 'CHECK constraint'
        WHEN 'D' THEN 'Default or DEFAULT constraint'
        WHEN 'F' THEN 'FOREIGN KEY constraint'
        WHEN 'L' THEN 'Log'
        WHEN 'FN' THEN 'Scalar function'
        WHEN 'FS' THEN 'Assembly (CLR) scalar-function'
        WHEN 'FT' THEN 'Assembly (CLR) table-valued function'
        WHEN 'IF' THEN 'In-lined table-function'
        WHEN 'IT' THEN 'Internal table'
        WHEN 'P' THEN 'Stored procedure'
        WHEN 'PC' THEN 'Assembly (CLR) stored-procedure'
        WHEN 'PK' THEN 'PRIMARY KEY constraint (type is K)'
        WHEN 'RF' THEN 'Replication filter stored procedure'
        WHEN 'S' THEN 'System table'
        WHEN 'SN' THEN 'Synonym'
        WHEN 'SQ' THEN 'Service queue'
        WHEN 'TA' THEN 'Assembly (CLR) DML trigger'
        WHEN 'TF' THEN 'Table function'
        WHEN 'TR' THEN 'SQL DML Trigger'
        WHEN 'TT' THEN 'Table type'
        WHEN 'U' THEN 'User table'
        WHEN 'UQ' THEN 'UNIQUE constraint (type is K)'
        WHEN 'V' THEN 'View'
        WHEN 'X' THEN 'Extended stored procedure'
        ELSE '-'
    END AS ObjectType
FROM sys.dm_exec_query_stats Stats
CROSS APPLY sys.dm_exec_sql_text(Stats.sql_handle) Txt
LEFT JOIN sys.objects Objects
    ON Objects.object_id = Txt.objectid
WHERE 1 = 1
AND Txt.dbid = db_id()                  --Execute for selected DB
AND Txt.dbid IS NOT NULL                --Exclude Ad-Hoc queries
AND DB_NAME(Txt.dbid) IS NOT NULL       --Exclude Ad-Hoc queries
GROUP BY
    DB_NAME(Txt.dbid),
    OBJECT_NAME(Txt.objectid, Txt.dbid),
    Objects.create_date,
    Objects.modify_date,
    Objects.type
ORDER BY DatabaseName, ProcedureName ASC

This script will show you the information you need to decide if you want to keep the object or, for example, want to delete it. With this script you can also check if Functions are still in use, or when a Trigger is last fired.

If you need more information about the object, you can also use the query below. This will show you the time needed to execute the object, the last execution time, etc.:

SELECT * FROM sys.dm_exec_query_stats Stats
CROSS APPLY sys.dm_exec_sql_text(Stats.sql_handle) Txt 

With these two queries you will be able to determine the status of the objects in your database.

SQL Server Temp Tables

In SQL Server we know 3 kinds of temporary tables: Local Temp Tables, Global Temp Tables and Table Variables. In this blog I will try to explain the differences, and tell you how to use them.
Local Temp Tables
The local temp table is the most commonly used temp table. This temp table is created with the following statement:

CREATE TABLE #TempTable
     (ID INT NOT NULL,
      Description VARCHAR(10) NULL)

The table that the script above creates is a temporary table that is stored on disk. To be exact, it’s stored in TempDB. The table can only be reached from within the same scope. It will be cleaned up automatically, but it’s wiser to clean it up yourself by using a DROP TABLE statement.

A session-specific identifier is added to the name, to make sure that temp tables from other sessions don’t use the same name. If you query sys.tables in TempDB, you will see a table name similar to this:
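You can see that generated name yourself with a query like the one below (a sketch; the exact padded suffix differs per session):

SELECT name
FROM tempdb.sys.tables
WHERE name LIKE '#TempTable%'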



The benefits of using a temp table are (amongst others): a reduction of required locks (because the user is the only one who has access), less logging necessary, easy debugging (in some cases), etc. One downside is that the object name is limited to 116 characters.
Global Temp Tables

Global temporary tables work just like local temporary tables (stored in TempDB, less locking necessary). However, they are visible to all sessions, until the session that created the object and all sessions referring to it go out of scope. Also, the name of the object will remain the same as you declared it. So if you query sys.tables in the database, you will notice that (unlike the local temp table above) the name is still TempTable.

A global temp table can be created with the following statement:

CREATE TABLE ##TempTable
     (ID INT NOT NULL,
      Description VARCHAR(10) NULL)

Most of the time you will not encounter these Global temp tables “in the wild”, because permanent tables are mostly preferred.

Table Variables

A temporary table can also be created as a table variable. This has several advantages: less locking than permanent objects, slightly better performance than local or global temp tables in many cases, not as much logging necessary, etc. Table variables are cleared automatically when the procedure, function or query goes out of scope.
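For comparison, the same table declared as a table variable:

DECLARE @TempTable TABLE
     (ID INT NOT NULL,
      Description VARCHAR(10) NULL)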

There are a few disadvantages and/or demands for using table variables:
    •They are allowed in SQL Server 2000 and above, and in databases with compatibility level 80 or higher
    •They cannot be used for “SELECT * INTO”-queries
    •They cannot be changed after declaration; they need to be recreated
    •They cannot be truncated
    •They cannot contain: User Defined Functions (UDF), User Defined Types (UDT), Computed Columns or Default Constraints
    •They cannot be dropped; you need to let them go out of scope
    •They cannot be created dynamically (a “SELECT * INTO @TempTable”-statement), or used in a dynamic SQL statement
    •They are not included in statistics by SQL Server, and you cannot create statistics on them manually
    •They don’t use parallelism when you use an “INSERT INTO”-statement
    •They will always have a cardinality of 1, because the table doesn’t exist at compile time. Cardinality refers to the uniqueness of a column: the lower the cardinality, the more duplicate items in a column
    •They must be referenced by an alias if you join the object in a query. If you don’t, the compiler will tell you the object doesn’t exist

As always, there is no right or wrong answer. In most cases you will use the Table Variable or Local Temporary Table. The right answer in your case will be a judgment call between performance and usability. There are a few rules of thumb you can use to determine which type of object suits your needs:
    •If you have fewer than 100 rows, you generally want to use a Table Variable
    •If you need an index on your object, use a Temporary Table (Local or Global)
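That second rule of thumb works because a local temp table supports indexes directly, for example (the index name is arbitrary):

CREATE TABLE #IndexedTemp
     (ID INT NOT NULL,
      Description VARCHAR(10) NULL)

CREATE CLUSTERED INDEX IX_IndexedTemp_ID ON #IndexedTemp (ID)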

Last update on object?

How many times do you wonder when an object was last updated? You can find out by using the system views in SQL Server:

USE Adventureworks

DECLARE @DBID INT = DB_ID('Adventureworks')

SELECT o.name AS TableName,
       ddius.last_user_update
FROM sys.dm_db_index_usage_stats ddius
JOIN sys.objects o
	ON o.object_id = ddius.object_id
WHERE database_id = @DBID
ORDER BY ddius.last_user_update DESC

This will give you the last_user_update, which is based on indexes: inserts, deletes and updates are tracked in the index usage statistics of the database. So this only works if an index of some sort is used on the object!