Comparing execution plans with SSMS

SQL Server 2016 (now available as CTP 3.0) ships with a new feature: the Execution Plan Comparison Tool. This new (and very cool) feature allows you to compare 2 execution plans within SQL Server Management Studio (SSMS). And according to Amit Banerjee (Blog | @banerjeeamit), this feature is also available in the “SSMS – September 2015” release (more info here). Let’s take a look at this new feature.

Creating resources
To generate an execution plan that we can compare later on, let’s create a table with some data first:

USE Sandbox
GO

-- The original script was partially lost; a minimal stand-in (table and column names are assumptions):
CREATE TABLE dbo.CompareTest (ID INT IDENTITY(1,1), Value VARCHAR(50));
GO

INSERT INTO dbo.CompareTest (Value) VALUES ('Some test data');
GO 10

To generate some execution plans, run the queries below with the “actual execution plan” option enabled, and save these plans to disk:
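The 2 queries from the original post were lost; any 2 similar queries against this table will do. A minimal sketch of what they could look like (save the resulting plans as “Test1.sqlplan” and “Test2.sqlplan”):

SELECT ID, Value
FROM dbo.CompareTest
WHERE ID <= 5;

SELECT ID, Value
FROM dbo.CompareTest
WHERE ID <= 5
ORDER BY Value;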

I’ve also made the 2 plans that were generated available for download here and here.

Comparing execution plans
To compare execution plans, you need to open the first execution plan from SSMS. In the plan window, right-click and click on “Compare Showplan”:

Now a pop-up window will open, that asks you what plan you want to use to compare it with. Now open “Test2.sqlplan”. The compare window opens, and you can compare plans:

As you can see, parts of the plan that are the same in both execution plans are colored the same. These colors are chosen at random, and can be different every time you compare 2 plans. This makes it easy to determine where both plans are equal, and where they differ.

If you click on one of the highlighted parts in the execution plan, the other plan will center its view on that part of the plan (unfortunately it’s a bit hard to see that with these small execution plans). If you click on a highlighted part, you can see that there is a blue rectangle drawn around the object:

Another really cool thing is the properties windows. If you don’t have that open by default, right-click on the first object in the execution plan (the SELECT part), and click on “Properties”. This shows 2 property-windows, in which you can compare the memory grant for both plans for example:

At this moment, you need to save both execution plans to disk in order to compare them. If you try to compare an execution plan of a query you just ran without saving it, it throws an exception:

But in my opinion this is just a minor issue, and I think this will be fixed in one of the next releases.

Comparing execution plans is definitely something I’ve been missing for years. Especially when you have 2 really big plans, this can really help you speed up the analysis and debug process.

Even though I’m pretty used to using SQL Sentry Plan Explorer to open execution plans, that tool doesn’t give me the option to compare plans (yet). At least not in the free version of the tool, which I always recommend to colleagues and friends.

This is another one of the really cool features in the overhauled SSMS, and I think the SQL Server team is rocking this new release.

Tracking query progress with Live Query Statistics

How frustrating is it to run a query on a database, and it seems to be “stuck”? We’ve all seen that happen, right? But how can you tell what the problem is, without letting the query complete (which could take a long time)? Microsoft (or actually the SQL Server team) gave us a new toy to play with that can help us in situations like this, and this tool is called “Live Query Statistics”.

Creating resources
To show you how Live Query Statistics works, let’s create a sample table first, and insert 1.000 rows:

-- The original script was partially lost; a minimal stand-in (column names are assumptions):
CREATE TABLE dbo.LiveQueryStats (ID INT IDENTITY(1,1), Value VARCHAR(50));
GO

INSERT INTO dbo.LiveQueryStats (Value) VALUES ('Some test data');
GO 1000

Because we want to actually see something happen, let’s multiply the number of rows coming from the single table by using the query below:

-- The select list was lost; returning a column from T1 is an assumption:
SELECT T1.ID
FROM LiveQueryStats T1
CROSS APPLY LiveQueryStats T2
CROSS APPLY LiveQueryStats T3

The query will return 1 billion rows (1.000 x 1.000 x 1.000), and will run for a while. This gives you the opportunity to look at the different features without rushing or losing your running query.

Live execution plan
Before running the query above, you need to enable the Live Query Statistics just like you would do to the normal execution plan. This is an extra button added in the SQL Server 2016 SSMS (and the downloadable version of course). When that’s enabled, you can execute the query, and SQL Server Management Studio (SSMS) will automatically switch to the live execution plan.

In this execution plan, you’ll see the data flow through the components, so you can actually see what SQL Server is doing at a specific moment:

Query completion percentage
Another great addition is the overall completion percentage, which you can find at the bottom of your SSMS window. This shows you the percentage of completion, which can help you estimate the time till completion:

Live rowcount
The properties window normally shows you the in-depth information of your query after completion (number of threads, memory grant, etc). With the live execution plan enabled, it can provide you with real-time statistics as well:
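Under the covers, this live data comes from the sys.dm_exec_query_profiles DMV that was introduced in SQL Server 2014. If you prefer T-SQL over the graphical plan, you can query the per-operator progress of a session that runs with live statistics enabled; a minimal sketch (the session id is just an example value):

SELECT p.session_id,
       p.node_id,
       p.physical_operator_name,
       p.row_count,           -- rows produced by this operator so far
       p.estimate_row_count   -- the optimizer's estimate
FROM sys.dm_exec_query_profiles AS p
WHERE p.session_id = 53       -- example: the session you're watching
ORDER BY p.node_id;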

Live execution plan from Activity Monitor
From the Activity Monitor you can also open the live execution plan of running queries on your instance. In the Activity Monitor, you have an additional tab in SQL Server 2016 called “Active Expensive Queries”. In this tab you can right-click on a running query, and click on “Show Live Execution Plan”. This opens a new tab in SSMS with the execution plan:

Unfortunately this only works for queries that have the live statistics enabled before execution. I don’t expect this to change in the final product, mainly because of the negative performance impact this feature can have on your queries and instance.

Drawbacks, Limitations and Bugs
Because this is only a CTP version of SQL Server 2016, we can expect some bugs and limitations, so I’ll be the last one to judge. And I don’t think these limitations are a big drawback of the feature.

One of these bugs is a crashing SSMS. If you open a live execution plan from the Activity Monitor, and close that tab, SSMS crashes every now and then.

Another weird thing is that exiting SSMS by clicking on the close button (top right) while a live execution plan is open, causes the list of recently used SQL Servers (the list used when connecting the Object Explorer, for example) to be cleared for some reason.

This behavior is reproducible, so I think this is a small bug in SSMS, or a side effect of the fact that I upgraded this instance from the first CTP version to the current version. But I’m sure this will be fixed by the SQL Server team in one of the next releases.

The biggest pitfall (in my opinion) of this feature is also mentioned in the documentation. This feature is “primarily intended for troubleshooting purposes” and “can slow the overall query performance”. And I know, it looks SO COOL to have this on all queries you’re running, but please be careful with this. Don’t enable this on every running query, but only use it to debug issues!

Another limitation for this feature (at least at the moment I’m writing this), is the use in combination with columnstore indexes, memory optimized tables and natively compiled stored procedures. You can read more about this here.

And I shouldn’t even have to mention this, but remember: you can only use Live Execution Plans when you have SHOWPLAN permissions on the database (same permissions you need to view normal execution plans).
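Granting that permission is a single statement (the user name below is just a placeholder):

-- SHOWPLAN is a database-level permission; SomeUser is an example principal
GRANT SHOWPLAN TO SomeUser;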

One of the questions I had when I read about this: are there any alternatives for this? One of the only things that comes to mind is the “Track My Query” tool, written by Matan Yungman (Blog | @MatanYungman). This tool allows you to monitor your query, and it will show you what part of your query is currently running. For more in-depth information, I recommend the SQLBits session in which he explains the inner workings of this tool. I’ve seen this session in person, and it was an interesting one.

With all these awesome new features that will be shipped in SQL Server 2016, I think there’s a whole new way of looking at SQL Server as a product. I’ve always liked working with SQL Server, but this version is taking it to the next level.

With all these new features, debugging issues is going to be a bit easier, developing new stuff is going to be faster, and the overall usability is going to skyrocket. I can’t wait to get my hands on the finished product!

Time traveling with SQL Server 2016: Temporal tables

Since the release of the first publicly available SQL Server 2016 CTP2, we have a whole list of new and cool features we can start using. One of those features is called temporal tables (or system-versioned tables). So what are temporal tables, and how can you use them?

What is a temporal table?
To understand the functionality of temporal tables, we first need to agree on the definition of a table in SQL Server. If you run a query on a table that contains data, that query can only return the current version of a record. This means you can only see the current “truth”, and there’s no way to travel back in time to see older versions of this record.

Temporal tables give you the opportunity to time travel in SQL Server. For every data change (Update, Delete and Merge) in your table, a historical copy is stored in a history table. The table on which you enabled this data tracking is changed to a “System-Versioned” table.
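For reference: a system-versioned table can also be created in a single statement. The sketch below (with made-up table and column names) shows the overall shape; in the rest of this post we’ll build one up step by step instead, to show what happens along the way:

-- One-shot example; dbo.Example and dbo.ExampleHistory are made-up names:
CREATE TABLE dbo.Example
	(ID INT PRIMARY KEY,
	 ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
	 ValidTo DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
	 PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo))
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.ExampleHistory));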

How does it work?
For every update or delete on a table where system_versioning is enabled, a historical copy of the record is stored:

The original image can be found in this blog post, written by Manoj Pandey (Blog | @manub22).

But how does that work? Let’s start by creating a table we will use to test system-versioned tables:

CREATE TABLE dbo.BankAccount
	(AccountNumber INT CONSTRAINT PK_BankAccount_TransactionID PRIMARY KEY,
	 AccountBalance FLOAT);

In order to make a table a system-versioned table, we need to add 2 datetime2 columns to our newly created table. SQL Server will use these columns to store a from- and to-date that will indicate the valid period of time of the record. By telling SQL Server to use these 2 columns with the keywords “PERIOD FOR SYSTEM_TIME”, SQL Server knows that it needs to use these columns when querying data from the history table:

-- The exact statement was lost; the default constraint names and values below are reconstructed:
ALTER TABLE dbo.BankAccount
ADD ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START HIDDEN
		CONSTRAINT DF_BankAccount_ValidFrom DEFAULT SYSUTCDATETIME(),
	ValidTo DATETIME2 GENERATED ALWAYS AS ROW END HIDDEN
		CONSTRAINT DF_BankAccount_ValidTo DEFAULT CONVERT(DATETIME2, '9999-12-31 23:59:59.9999999'),
	PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo);

In the above script, the “HIDDEN” keyword is used. This can only be used from CTP 2.1 and higher. What HIDDEN does is hide the columns in your original table, to make sure application- and database code isn’t going to fail because you added these 2 new columns. You can still query the columns, but if you run a “SELECT *” query these columns won’t be returned.
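A quick way to see that for yourself:

SELECT * FROM dbo.BankAccount
-- ValidFrom and ValidTo are not returned by the query above,
-- but you can still query them explicitly:
SELECT AccountNumber, ValidFrom, ValidTo FROM dbo.BankAccount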

When that’s done we can enable system-versioning on the table:

ALTER TABLE dbo.BankAccount
SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.MyBankAccountHistory));

Data changes & history
Now we can take a look at how data changes are captured, and history is recorded. Before you execute any of the statements below, enable the execution plan in your SSMS. If we insert a new record, no history is created:

INSERT INTO dbo.BankAccount
	(AccountNumber, AccountBalance)
VALUES
	(2147483647, 10000)

We just see the expected Clustered Index Insert, nothing special:

Now, let’s update the bank account balance (the WAITFOR is added just to make sure we have a clear difference in change time for the next few steps):

UPDATE dbo.BankAccount
SET AccountBalance = AccountBalance - 2500

WAITFOR DELAY '00:01:00'

UPDATE dbo.BankAccount
SET AccountBalance = AccountBalance + 1000

If you look at your execution plan, you’ll see an extra DML command in the execution plan:

Besides the expected Clustered Index Update, there is an extra Clustered Index Insert. This is the insert statement that is generated by SQL Server, to insert a record in the history table. There’s no user interaction needed to store historical data. SQL Server will take care of that process for you.
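You can verify this by querying the history table directly (it’s just a regular table):

SELECT AccountNumber, AccountBalance, ValidFrom, ValidTo
FROM dbo.MyBankAccountHistory	-- one row for every overwritten version of the record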

Time travelling
Now that we store historical data, it’s time to start the actual time travelling. To paint a complete picture, this is a timeline of the data inserts and updates:

Now that we have a clear timeline, let’s start our time travel. Besides creating and maintaining the history table, you also get to use a bit of new syntax in your query. For system-versioned tables, you can use the “FOR SYSTEM_TIME” clause in your query. In order to show you the timeline in SQL Server, let’s use this new syntax to query our history per minute:

SELECT * FROM dbo.BankAccount FOR SYSTEM_TIME AS OF '2015-07-09 09:42:00' --No data available
SELECT * FROM dbo.BankAccount FOR SYSTEM_TIME AS OF '2015-07-09 09:43:00' --First insert
SELECT * FROM dbo.BankAccount FOR SYSTEM_TIME AS OF '2015-07-09 09:44:00' --First update
SELECT * FROM dbo.BankAccount FOR SYSTEM_TIME AS OF '2015-07-09 09:45:00' --Second update
SELECT * FROM dbo.BankAccount FOR SYSTEM_TIME AS OF '2015-07-09 09:46:00' --No other changes

Remember, this is based on my data. So if you want to query your timeline, change the datetime string based on your data!
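To reconstruct the timeline for your own data, you can list the current record together with all its historical versions:

SELECT AccountNumber, AccountBalance, ValidFrom, ValidTo FROM dbo.BankAccount
UNION ALL
SELECT AccountNumber, AccountBalance, ValidFrom, ValidTo FROM dbo.MyBankAccountHistory
ORDER BY ValidFrom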

So basically, you can start querying your data by using a specific point in time:

DECLARE @PointInHistory DATETIME2
SET @PointInHistory = '2015-07-09 09:45:00'

SELECT * FROM dbo.BankAccount FOR SYSTEM_TIME AS OF @PointInHistory

Besides the actual point in time, you can use 3 other ways to query your data:

SELECT AccountNumber, AccountBalance, ValidFrom, ValidTo
FROM dbo.BankAccount
FOR SYSTEM_TIME FROM '2015-07-09 09:43:00' TO '2015-07-09 09:45:00'

SELECT AccountNumber, AccountBalance, ValidFrom, ValidTo
FROM dbo.BankAccount
FOR SYSTEM_TIME BETWEEN '2015-07-09 09:43:00' AND '2015-07-09 09:45:00'

SELECT AccountNumber, AccountBalance, ValidFrom, ValidTo
FROM dbo.BankAccount
FOR SYSTEM_TIME CONTAINED IN ('2015-07-09 09:43:00', '2015-07-09 09:45:00')

Table changes
In practice you’ll not only see data changes happening on tables, but you’ll also have schema changes on your table once in a while. But how does a temporal table cope with schema changes? Let’s try to add a column to our table:

ALTER TABLE dbo.BankAccount ADD NewColumn VARCHAR(10)

This results in an error:

In order to add a column, we first need to disable system-versioning:

ALTER TABLE dbo.BankAccount
SET (SYSTEM_VERSIONING = OFF);

ALTER TABLE dbo.BankAccount ADD NewColumn VARCHAR(10)

Now let’s enable system-versioning again:

ALTER TABLE dbo.BankAccount
SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.MyBankAccountHistory));

This results in another error:

So we first need to add the same column to the history table, before enabling system-versioning again:

ALTER TABLE dbo.MyBankAccountHistory
ADD NewColumn VARCHAR(10)

ALTER TABLE dbo.BankAccount
SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.MyBankAccountHistory));


Disabling system-versioning means that for a short period of time, you could lose historical data. A possible way to get around this is by setting your database in single-user mode before you disable it. This prevents other users from changing data in that table. If you run this in your production environment, this might not be your best option though! Disabling system-versioning does not remove the historical data that was already stored.
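For completeness, this is what that could look like (assuming your database is called Sandbox; don’t forget to switch back to multi-user mode afterwards):

ALTER DATABASE Sandbox SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
-- ...disable versioning, change the schema, re-enable versioning (see above)...
ALTER DATABASE Sandbox SET MULTI_USER;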

Indexing historical data
Performance isn’t only important when you query your “normal” tables, but also if you want to query your newly created temporal tables. If you want to write a business process that uses your historical data, you need to make sure this doesn’t slow down your application. But how do you achieve that?

Just like on a normal table, you can create indexes on the history table. This table is basically a normal table, so there’s no reason why you can’t create indexes on it:

-- The original CREATE INDEX line was lost; the index name is an assumption:
CREATE NONCLUSTERED INDEX IX_MyBankAccountHistory
ON dbo.MyBankAccountHistory (AccountNumber, AccountBalance)

Historical data cleanup
It’s really great to store historical versions of your records in a separate table, but this also provides you with new challenges. You need more storage for your data, and you might want to remove data after a certain period of time. At this point in time, system-versioned tables don’t have a cleanup process. This might change in the future, but right now you need to clean up these tables yourself. This can be as easy as writing a query that runs through all system-versioned tables and deletes old data, and scheduling that in a SQL Server Agent job. But again, this is something to keep in mind when you start working with temporal tables.

I had a really good conversation about this with Borko Novakovic (@borko_novakovic), who is the Program Manager for this part of SQL Server. The code below is an example of such a process, that Borko provided:


/*Drop versioning to allow deletion of historical data*/
ALTER TABLE dbo.BankAccount
SET (SYSTEM_VERSIONING = OFF);

/*DELETE old data from the MyBankAccountHistory table*/
DELETE
FROM dbo.MyBankAccountHistory
WHERE ValidTo < '2015.01.01';

/*Re-establish versioning*/
ALTER TABLE dbo.BankAccount
SET (SYSTEM_VERSIONING = ON
		(HISTORY_TABLE = dbo.MyBankAccountHistory,
		 DATA_CONSISTENCY_CHECK = OFF));


Deleting data without disabling system-versioning doesn’t work, because you’ll get an error like this:

I do want to thank Borko again for his time. He helped me out a lot, and I’m thankful he took the time to do that. Thanks Borko!!!

Remove system-versioning
Removing system-versioned tables from your systems takes one extra step, because just executing a “DROP TABLE” statement doesn’t work. SQL Server prevents you from dropping temporal tables without first disabling system-versioning. Let’s start by disabling it:

ALTER TABLE dbo.BankAccount
SET (SYSTEM_VERSIONING = OFF);
This disables the system-versioning on your original table, and transforms the history table to a normal table. If you want to get rid of the tables, you can just drop them both like you would normally do:

DROP TABLE dbo.BankAccount
DROP TABLE dbo.MyBankAccountHistory

Besides this blog, there are 2 fantastic posts about this topic, written by Itzik Ben-Gan (Blog | @ItzikBenGan), that you need to read (Part 1, Part 2).

One thing you definitely MUST SEE is a video with Borko and Scott Klein (Blog | @SQLScott) on Channel 9. In this video, Borko explains the idea behind temporal tables, and shares some insights on this new feature.

Other information on temporal tables can be found on MSDN:

Temporal Tables
SQL Server 2016 (MSDN root)

SQL Sentry Plan Explorer: You can’t live without it

Every data professional out there will run into slow running queries, or performance issues they can’t explain, at some point. At that moment, it’s difficult to explain the problem without looking at an execution plan. SQL Server Management Studio (SSMS) has built-in functionality to look at these execution plans. But this isn’t always as useful as we would like it to be. Luckily there is a great free tool that’ll help you with query tuning and pinpointing the issue in badly performing queries.

SQL Sentry Plan Explorer is free, and available on the website of SQL Sentry. Even though it says it’s a trial version, it won’t expire after a certain period. The only thing that’s “trial” in this version is that some functionality is blocked. But all the good stuff is available in the free version.

Integration in SSMS
The installer doesn’t ask you to shut down SSMS, but I recommend that you do. If you don’t close SSMS, you won’t see the SSMS add-in menu; it will only show up after the setup has finished and you start a new instance of SSMS.

Creating a query, and opening it in Plan Explorer
As an example, I’ve created a really bad query on the AdventureWorks2012 database:

USE AdventureWorks2012
GO

-- NOTE: a few lines of the original script were lost; the SELECT keywords, the CASE
-- keyword, the CROSS APPLY clauses and some select-list columns are reconstructed.
DECLARE @MinPrice INT = -1;

WITH Shipping AS
(
SELECT
  PV.ProductID AS ProductID,
  UM.Name AS ShippingPer,
  CASE
    WHEN UM.Name = 'Each' THEN PV.StandardPrice
    WHEN UM.Name = 'Dozen' THEN PV.StandardPrice / 12
    ELSE @MinPrice
  END AS ShippingCostPerUnit
FROM Purchasing.ProductVendor AS PV
INNER JOIN Production.UnitMeasure AS UM ON UM.UnitMeasureCode = PV.UnitMeasureCode
)

SELECT
  P.ProductID,
  P.ProductNumber,
  P.Name,
  dbo.ufnGetProductListPrice(P.ProductID, GETDATE()) AS XYZ,
  P.ListPrice + S.ShippingCostPerUnit AS TotalCostProduct,
  Quantity.TotalQuantity * P.ListPrice AS TotalValueStock,
  ((Quantity.TotalQuantity * P.ListPrice) / Locations.TotalLocations) AS AverageValuePerLocation
FROM Production.Product AS P
INNER JOIN Shipping AS S ON S.ProductID = P.ProductID
CROSS APPLY
(
  SELECT SUM(Quantity) AS TotalQuantity
  FROM Production.ProductInventory
  WHERE ProductID = P.ProductID
  GROUP BY ProductID
) AS Quantity
CROSS APPLY
(
  SELECT COUNT(LocationID) AS TotalLocations
  FROM Production.ProductInventory --WITH(INDEX(0))
  WHERE ProductID = P.ProductID
) AS Locations
WHERE P.ListPrice <> 0
ORDER BY P.ProductID, P.ProductNumber, P.Name, TotalLocations ASC

If you run this query in SSMS, and you include the actual execution plan (Ctrl + M), it will show you the execution plan in a separate result window. In this window, you’ll have the option to right-click, and choose “View with SQL Sentry Plan Explorer”:

If you click this, you’ll open Plan Explorer, and it will show you the execution plan:

So, is that all?
I can almost hear you think: “So what’s the difference between Plan Explorer and the default SSMS windows, besides the fancy colors?” Just take a look at all the extra possibilities you get with Plan Explorer. For example, how does your join diagram look? Can you pull that from SSMS? No? Well, I can do that with Plan Explorer:

Your most expensive operation in the query? Yes, you could find that by looking at the percentages shown in your query plan. But can you show me why they are that expensive? Again, I can do that with Plan Explorer:

Can you do your job without it?
If I ask myself this question, I think I can honestly answer it with: yes. Yes, I can do my job without it. But this tool makes it SO much easier to pinpoint the problem, and to get a quick overview of the query performance. Normally I look at the query plan in SSMS first, and then immediately open up a Plan Explorer window to take a closer look at the problems.

So if you write queries on a daily basis, and you’re responsible for, or interested in, query performance: download it today, and try it out yourself. I promise you, you won’t regret it!
If you want to read more about SQL Sentry Plan Explorer, don’t forget to check out these blog posts:

Julie Koesmarno: Analysing Execution Plans With SQL Sentry Plan Explorer
Mickey Stuewe: On sabbatical
Chris Yates: SQL Sentry Plan Explorer – Don’t Leave Home Without It

Data paging using offset

With every new release of SQL Server, we get to use new features that make our lives as developers and administrators so much easier. A few days ago, I came across an old piece of code (SQL Server 2005, if I remember correctly) that I used to page data for a CMS I built when I was still a web developer.

The company I worked for needed a new website, and wanted an HTML-editor to edit content on the website. This content was stored in SQL Server, and was retrieved by the website. With a few business rules we decided which content was visible, and which content was hidden from the website.

One of the features of the website was a news feed. But because there were so many news articles, we needed to show the top 10 articles on the first page, and let the user click through to the next page of articles. Every page should show 10 new articles, so we needed to page through the rest of the news articles.

Creating the sample data
In order to show you the problem and solution, we need to create a sample table:

-- The first lines of the original script were lost; the table name and ID column are reconstructed:
CREATE TABLE dbo.Paging
  (ID INT IDENTITY(1,1),
   Title VARCHAR(50),
   Content VARCHAR(50))

The test data we need to insert looks the same for every record:

INSERT INTO dbo.Paging
  (Title, Content)
VALUES
  ('This is an article', 'This is the content')
GO 50

This script will insert 50 records into the Paging table.

The old way
In older versions of SQL Server you needed to build your own solution for this problem. Let’s assume you clicked the news feed button on the website, and we want to switch to page 2 of the results. The solution I built back then looked something like this:

DECLARE @RowsToShow INT = 10,
        @RowsToSkip INT = 10

SELECT TOP(@RowsToShow)
  ID,
  Title,
  Content
FROM dbo.Paging
WHERE ID NOT IN
  (SELECT TOP(@RowsToSkip) ID FROM dbo.Paging ORDER BY ID)
ORDER BY ID

In the query above, you can see we skip the first 10 rows, and retrieve the next 10 rows after that. That means that you need to remember on the website which records were already retrieved, and which records you want to see next. The easiest way to do this is by selecting the ID’s you’ve already shown, and retrieving the next set of rows.

This means you get execution plans like this:

The new way
From SQL Server 2012 onwards, we can use a new feature that is called OFFSET. That feature allows us to “window” our dataset, and retrieve a small subset of data without using a TOP and subquery, like in the example above. The new query would look like this:

SELECT
  ID,
  Title,
  Content
FROM dbo.Paging
ORDER BY ID
OFFSET 10 ROWS
FETCH NEXT 10 ROWS ONLY

In the query above, you can see an offset of 10 rows, and a fetch of 10 rows. This means that it skips the first 10 records, and retrieves the next 10 records after that. But how can you get this to work with dynamic resultsets and pages? This is one way to do it:

DECLARE @RowsPerPage INT = 10,
        @PageNumber INT = 2

SELECT
  ID,
  Title,
  Content
FROM dbo.Paging
ORDER BY ID
OFFSET ((@PageNumber - 1) * @RowsPerPage) ROWS
FETCH NEXT @RowsPerPage ROWS ONLY

The offset is calculated by taking the @PageNumber parameter minus 1 (if we wouldn’t subtract 1, the offset would always skip the first 10 records), multiplied by the @RowsPerPage parameter, to calculate how many rows we need to skip. For page 2 that comes down to an offset of (2 - 1) * 10 = 10 rows; for page 5 it would be (5 - 1) * 10 = 40 rows.

After that, we use the @RowsPerPage in the FETCH NEXT clause to retrieve the number of rows we want (in this case the next 10). This results in a completely different execution plan:

As you can see, this has a lot less impact on SQL Server. This becomes really visible if you compare both in SSMS:

I/O Costs
Comparing query costs is nice when you quickly compare 2 different approaches for the same solution, but in the end it all comes down to I/O costs. So which query is the fastest solution, and how are we going to test that?

First of all, we need to measure the I/O calls. We do that by using a DBCC command called DROPCLEANBUFFERS. This allows us “to test queries with a cold buffer cache without shutting down and restarting the server”. SO DON’T USE THIS IN PRODUCTION!!!

So the complete testscript looks like this:

-- SET STATISTICS IO is an assumption; it's one way to see the page reads per query:
SET STATISTICS IO ON
GO

DBCC DROPCLEANBUFFERS
GO

DECLARE @RowsToShow INT = 10,
        @RowsToSkip INT = 10

SELECT TOP(@RowsToShow)
  ID,
  Title,
  Content
FROM dbo.Paging
WHERE ID NOT IN
  (SELECT TOP(@RowsToSkip) ID FROM dbo.Paging ORDER BY ID)
ORDER BY ID
GO

DBCC DROPCLEANBUFFERS
GO

DECLARE @RowsPerPage INT = 10,
        @PageNumber INT = 2

SELECT
  ID,
  Title,
  Content
FROM dbo.Paging
ORDER BY ID
OFFSET ((@PageNumber - 1) * @RowsPerPage) ROWS
FETCH NEXT @RowsPerPage ROWS ONLY

So we clean the SQL Server buffers, run the first query, clean the buffers again, and run the second query. Now the effect of the new statement is really obvious, if you look at I/O costs:

So the old version of the query (with the sub-select) scans the table twice, and reads 51 pages from the cache. The new approach (with the OFFSET) scans the table only once, and reads only 1 page from the cache.

The fewer I/O calls SQL Server needs to retrieve the result from disk or cache, the faster your query will run. In this case, we’ve tuned the query from 51 pages read to 1 page read. And we’ve only tested this on a table with 50 records; on bigger tables the difference will be even more significant. So here’s a lot of performance improvement from only one new piece of functionality. And there is a lot more out there.

T-SQL Tuesday #43 – Hello, Operator?

T-SQL Tuesday is a recurring blog party, started by Adam Machanic (Blog | @AdamMachanic). Each month a blog will host the party, and everyone who wants to can write a blog post about a specific subject.

This month the subject is “Hello, Operator?”. If you want to read the opening post, please click the image below to go to the party-starter: Rob Farley (Blog | @rob_farley).

Execution Plan
Everybody that ever got serious with SQL Server knows about execution plans. And like Rob said in his opening post: if you don’t, start looking into them! But when you start working with execution plans, it gets confusing really fast. All those weird “building blocks” that tell you what SQL Server did with your query?! It almost looks like magic…!

But when you dive into it, they become easier to read and they suddenly are somewhat understandable. But hey, even after all those years looking at execution plans, some things still amaze me.

Last week I was asked to look at a query that was used in an SSIS package. This package was one of the slowest of the nightly ETL processes. So I started off by looking at the execution plan:

Start pinpointing the first issue
Then I started digging, and I noticed this:

	ON m.Col1 = ISNULL(LOWER(t.Col1) COLLATE Latin1_General_CS_AS, '(N/A)')
	ON s.Col2 = ISNULL(LOWER(t.Col2) COLLATE Latin1_General_CS_AS, '(N/A)')
	ON k.Col3 = ISNULL(LOWER(t.Col3) COLLATE Latin1_General_CS_AS, '(N/A)')

But wait a minute… The collations we use aren’t case sensitive, so why use the LOWER() function at all? And the collations of the 2 databases used in the query are practically the same (“SQL_Latin1_General_CP1_CI_AS” and “Latin1_General_CI_AS”), while the column collations are exactly the same (“SQL_Latin1_General_CP1_CI_AS”). SQL Server doesn’t throw an error if I join both sets, and the results aren’t different whether I use COLLATE or not. So we learned another thing: the column collation is what’s used in the ON clause of the JOIN. Both functions aren’t necessary, because the collations are equal, and thus they’re wasting our precious CPU cycles…

Now here comes the moment you’ve all been waiting for. How does the execution plan differ? This is what the new execution plan looks like:

So what are we missing? You guessed right! We’re missing this bad boy:

So what does the “Compute Scalar” actually do? According to MSDN:


The Compute Scalar operator evaluates an expression to produce a computed scalar value.

So that means it calculates a new value for every input row. This calculation can be as simple as a conversion of the value, or a concatenation of multiple values. Most of the time these “Compute Scalar” operators aren’t a major cost in your execution plan, but they can become an issue that uses a lot of CPU.

But what does this “Compute Scalar” do in this query? If I remove the ISNULL() from the query, it’s still there. It disappears if I remove the COLLATE. So in this case, the “Compute Scalar” is “calculating” a new value for every record. This “calculation” is actually a conversion from 1 collation to the other.
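If you want to see this behavior for yourself, here’s a minimal repro sketch (the temp tables and collations below are made up):

-- Two small tables with the same (case insensitive) column collation:
CREATE TABLE #Left (Col1 VARCHAR(10) COLLATE SQL_Latin1_General_CP1_CI_AS);
CREATE TABLE #Right (Col1 VARCHAR(10) COLLATE SQL_Latin1_General_CP1_CI_AS);

-- The COLLATE in the ON clause shows up as an extra Compute Scalar in the plan:
SELECT *
FROM #Left AS L
INNER JOIN #Right AS R ON L.Col1 = R.Col1 COLLATE Latin1_General_CS_AS;

-- Without the COLLATE, the Compute Scalar disappears:
SELECT *
FROM #Left AS L
INNER JOIN #Right AS R ON L.Col1 = R.Col1;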

But what’s the difference?
If we compare both versions of the query, the modified version has a lot less impact on the server:

And this is only from removing the LOWER() and COLLATE functions. With a little bit more effort, you could rewrite another part of the query, maybe add an index, etc. So as you can see, small changes can have a huge impact!

A “Compute Scalar” isn’t that hard to understand, but you’ll find it very often in your execution plans. So even though it isn’t the worst performance blocker in the world, it’s an interesting one to get familiar with.

If you don’t use SQL Sentry Plan Explorer yet, download your copy now!

Huge operator costs in execution plan

If you work with SQL Server, you’ll need to look at execution plans sooner or later. Now and in the past, I’ve had the privilege of introducing many of my (former) colleagues to these “works of magic”. But what happens if you can’t trust the plans you’re looking at…?

Say what…?
Last week I was asked to look at a slow running query. The first thing I did was look at the execution plan. It hit me pretty fast that this might be a “less optimized” query:

As you can see, it was a query with a lot of CTE’s and sub-selects, that was built by multiple developers and analysts. No one took the time to review or rewrite parts of the query; they all built their additions on top of the old version. This isn’t uncommon in most companies, because time is precious and costs a company money. And people often find it difficult to ask or make time for quality control.

But looking a little bit closer, I started noticing that the operators in the execution plan were a little bit too high if you ask me:

This couldn’t be correct! So I asked the help of a life saver called SQL Sentry Plan Explorer. If you don’t already have it, start using it now! And no, they don’t pay me to say this (but if they want to, I have nothing against that…). The main reason I use Plan Explorer is that it shows you a little bit more information, and the layout is better than the default execution plans from SQL Server. But what does Plan Explorer show us if we load the same plan?

It seems that the Plan Explorer shows the right numbers. But how is this possible? After some online searching, I came to the conclusion that I’m not the only one having this issue:

Huge operator cost in estimated execution plan
Query plan iterator cost percentage way off
SSMS execution plan sometimes exceeds 100
Katmai also 2005 graphical plan operator costs exceed 100

But unfortunately, all of these issues are marked for “future release”. Only 1 of them was created last year; the other Connect items are much older. So maybe they will fix it in the next release that was just announced.

But keep in mind, even though the numbers look weird, it doesn’t affect performance.

