T-SQL Tuesday #82 – To the cloud… And beyond!!!

T-SQL Tuesday is a recurring blog party that was started by Adam Machanic (Blog | @AdamMachanic). Each month a blog will host the party, and everyone who wants to can write a blog post about a specific subject.

This month the subject is “To the cloud… And beyond!!!”. If you want to read the opening post, please click the image below to go to the invitation.

As I mentioned in the invitation, Azure allows us to easily host and scale our platform. But it also comes with a whole new set of challenges. Some of the observations I made in the last few months:

Agent is missing in action…
One of the biggest challenges for me when I started migrating stuff to Azure was the missing SQL Server Agent. I never thought about how often I actually used it until I missed it! There’s no “let me quickly schedule this script for tomorrow morning” anymore.

And although there are multiple solutions to this problem available in Azure (Scheduler, Automation), I picked the easy way out. Because we’re running 2 platforms in parallel (an on-premise private cloud and Azure), I decided to create a VM in the private-cloud platform to run operational tasks, like scheduled maintenance. This is not a solution I would like to keep for the upcoming years, but it allows us to easily run scheduled tasks without losing time on figuring out how Azure Automation or Scheduler works.

For more information on this, please read this post.

Contained datastore
A thing that can make migrations to the cloud a bit more difficult is that Azure SQL databases are basically a contained datastore (you would call it a “contained database” when you run it on-premise). This means that you (by default) can’t connect from one database to the other. This could mean that you need to rewrite your applications or stored procedures, or maybe even redesign your entire database/application/domain model.
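For example, a simple cross-database query that runs fine on an on-premise instance will fail on an Azure SQL database. A minimal sketch (the database, table and column names are just an illustration):

--Works on an on-premise instance, but on an Azure SQL database this fails with an error along the lines of
--"Reference to database and/or server name ... is not supported in this version of SQL Server"
SELECT o.OrderId, c.CustomerName
FROM dbo.Orders o
INNER JOIN OtherDatabase.dbo.Customers c ON c.CustomerId = o.CustomerId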

This also means that running a stored procedure from Ola Hallengren’s maintenance solution can only be done on the specific database itself, and not from the master database like the on-premise version does. These small challenges can be overcome, but it does mean code duplication in your databases, because the maintenance procedures need to be deployed to every single database.

Running database maintenance
Just like on-premise databases, Azure SQL databases also need to be serviced every now and then. To make sure I’m not reinventing the wheel I’ve re-used a solution that has proved its worth in practice: Ola Hallengren’s “SQL Server Maintenance Solution”.

But because every Azure SQL database is a contained datastore, I’ve made some minor changes to his solution. You can read more about that here.

Performance issues & alerting
One of the biggest differences for me between an on-premise database and an Azure SQL database is how I need to determine the cause of performance problems and solve them. When you’re working on an on-premise database it’s easy to just look at the CPU and RAM used by your instance, and watch the disk IO in the Windows performance monitor. But you don’t have those in Azure of course. There you need to work with “DTU” percentages, and “Data IO” and “Log IO” counters. But DTU (Database Transaction Unit) makes monitoring performance a bit too abstract if you ask me. Just look at the explanation Microsoft gives us regarding DTU:


The Database Transaction Unit (DTU) is the unit of measure in SQL Database that represents the relative power of databases based on a real-world measure: the database transaction. We took a set of operations that are typical for an online transaction processing (OLTP) request, and then measured how many transactions could be completed per second under fully loaded conditions.

For example, a Premium P11 database with 1750 DTUs provides 350x more DTU compute power than a Basic database with 5 DTUs.

One thing I’m really happy with is that Adam Machanic published a new version of his sp_WhoIsActive for Azure a few months ago. This gives me the opportunity to quickly look at the results to find a cause of the performance issues.

When, after a while, you have a gut feeling about the performance of your databases, you run into a new challenge with monitoring: DTU usage is kept with a maximum retention of just 1 hour. This gives me another challenge, because I would like to see a longer retention. And because I definitely don’t want to over-scale my databases, the DTU alerts are set fairly tight, so in some cases a long-running process can trigger one of them. When this happens during the day it’s annoying, but still okay. But it has also woken me up in the middle of the night during an on-call rotation. This forced me to set some of the alerts to the maximum period of 45 minutes / 1 hour, and just disable others. In my opinion, this could use some work by Microsoft.
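If you want to look at those counters yourself, you can query the sys.dm_db_resource_stats DMV from within the database itself; it keeps roughly the last hour of usage, in 15-second intervals:

--Resource usage for the current database over (roughly) the last hour
SELECT	end_time,
	avg_cpu_percent,
	avg_data_io_percent,
	avg_log_write_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC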

So in general I’m REALLY satisfied with the functionality that Azure SQL database provides us with, even though there’s always something left to wish for of course. The move to Azure gives me as a DBA a new set of challenges and (in some cases) demands other/new skills, but it also allows me to do more than just manage a bunch of databases. I’m now also a part-time system administrator, network admin, (data) architect, developer, etc. This is something I personally really like, because it’s exactly that which allows me to broaden my horizon, and use a set of skills I hadn’t used for a while when working on on-premise databases.

So even though some people were afraid the DBA role would disappear because of the rise of the cloud, I’m convinced it’s here to stay, although it will be a role that requires more skills than before, and I think that’s a good thing.

Invitation: T-SQL Tuesday #82 – To the cloud… And beyond!!!

T-SQL Tuesday is a recurring blog party that was started by Adam Machanic (Blog | @AdamMachanic). Each month a blog will host the party, and everyone who wants to can write a blog post about a specific subject.

This month I’ll be the T-SQL Tuesday host, and I’m really honored! Thanks for inviting me to be the host Adam!

The topic
When Adam asked me if I wanted to host another T-SQL Tuesday, I immediately knew a topic I wanted to talk about: The cloud, and (if you want to) specifically about Azure SQL database.

The last time we blogged about the cloud was back in December of 2013, when Jorge Segarra hosted this monthly party. Since then, “the cloud” (to use that buzzword again) has changed a lot, and I think the possibilities are endless nowadays.

The reason I would like to see you all blog about this topic is that I’ve been working with Azure SQL databases a lot since I switched jobs in December. Currently I’m working for a small start-up that has a cloud-first focus. This means that Azure is the main host (where possible) for our (data)platform. And although hosting our platform in Azure makes it easier for us to scale parts of that platform, it also gives us new challenges to overcome. And I’m probably not the only one with that experience…

So with that said, I would like to give you the opportunity to blog about the cloud, in the broadest sense that you can imagine. I’m imagining really interesting blogs about migrating to the cloud, missing features in Azure SQL database, how you’re determining the cause of and solving performance problems, etc. Please surprise us all with your view on the cloud.

The rules
– Your post must be published between 00:00:00 UTC and 23:59:59 UTC on Tuesday 13th of September
– Include the T-SQL Tuesday logo at the top of your post, and make sure your post links back to this one (trackbacks and comments are moderated, so it might take some time before they’re visible) or tweet about it using the hashtag #TSQL2sDay
– If you like this, check Steve Jones’ (Blog | @way0utwest) blog post that contains the list of topics, and contact Adam Machanic (Blog | @AdamMachanic) if you’d like to host this party yourself

Easy pivot: From key-value pairs to columns

This article was recently published on dev.getroadmap.com:


If there is one thing I pretty much hate doing in T-SQL, it’s PIVOT and UNPIVOT. Even though I’ve used them a few times over the last couple of years, it’s an adventure to find out how they work every time. And I know a lot of people struggle with this part of T-SQL, so let’s take a look at a (hopefully) simple example.

Key-Value pairs
The idea of using key-value pairs to store data isn’t new, but with the rapid development of cloud solutions like Azure and the hype around NoSQL databases, using key-value pairs to store data got a big boost. Especially developers (in my experience) love using key-value pairs to store their data, because it’s easy for them to consume the data in an application. But it gives the database professional an extra challenge, because we’re used to retrieving columns with values instead of a record per value. So how can we turn those key-value pairs into columns?

To show you how to change key-value pairs to columns, let’s create an example based on flights. If one of our customers needs to travel and they take the plane, there is some basic information we need to show them the flight status in the app. So let’s create a table to store that dataset:

CREATE TABLE dbo.Flights
	(FlightId UNIQUEIDENTIFIER,	--Every flight is identified by a GUID
	[Key] VARCHAR(255),
	[Value] VARCHAR(255))

and insert a flight:

INSERT INTO dbo.Flights
	(FlightId, [Key], [Value])
VALUES
	('2A0C8B05-682A-41CE-8516-C6070CD92851', 'FlightId', '3074e015-62b2-4f76-a8b1-463c53cd79c5'),
	('2A0C8B05-682A-41CE-8516-C6070CD92851', 'AirlineIATACode', 'VY'),
	('2A0C8B05-682A-41CE-8516-C6070CD92851', 'FlightNumber', '8336'),
	('2A0C8B05-682A-41CE-8516-C6070CD92851', 'DepartureAirportCode', 'RTM'),
	('2A0C8B05-682A-41CE-8516-C6070CD92851', 'DepartureAirportName', 'Rotterdam The Hague Airport'),
	('2A0C8B05-682A-41CE-8516-C6070CD92851', 'DepartureDateTime', '2016-08-12 12:15'),
	('2A0C8B05-682A-41CE-8516-C6070CD92851', 'DepartureTerminal', '1'),
	('2A0C8B05-682A-41CE-8516-C6070CD92851', 'Gate', NULL),
	('2A0C8B05-682A-41CE-8516-C6070CD92851', 'ArrivalAirportCode', 'BCN'),
	('2A0C8B05-682A-41CE-8516-C6070CD92851', 'ArrivalAirportName', 'Barcelona–El Prat'),
	('2A0C8B05-682A-41CE-8516-C6070CD92851', 'ScheduledArrivalDateTime', '2016-08-12 14:14'),
	('2A0C8B05-682A-41CE-8516-C6070CD92851', 'FlightIsCancelled', '0')

So now we have a dataset that consists of 12 key-value pairs:

Converting to columns
But when you’re running a query you would like to have columns instead of rows. So by using a PIVOT statement, you can convert these rows into columns:

SELECT *
FROM
	(SELECT [Key], [Value] FROM dbo.Flights
	) AS SourceTable
PIVOT
	(MIN([Value])	--Needs to be an aggregate function
	FOR [Key] IN
		([FlightId], [AirlineIATACode], [FlightNumber], [DepartureAirportCode], [DepartureAirportName], [DepartureDateTime],
		 [DepartureTerminal], [Gate], [ArrivalAirportCode], [ArrivalAirportName], [ScheduledArrivalDateTime], [FlightIsCancelled])
	) AS PivotOutput

Because the keys are always the same for flights, I can hard-code those key names in the PIVOT statement. And because the PIVOT statement needs an aggregate function to retrieve the value, I needed to add the MIN() function, but this doesn’t change the output in any way.

Now we have the same dataset, but converted into columns:

So from now on, converting key-value pairs to columns shouldn’t be a problem anymore!

Running maintenance on Azure SQL databases

This article was recently published on dev.getroadmap.com:


To keep your data healthy and in top shape, we all know you need to run something called database maintenance on your databases. This prevents things like performance problems or unrecoverable data, and that is sort of the core business of DBAs. And when we look at how this can be performed on a regular basis, an on-premise instance is a bit easier to maintain than an Azure SQL database. That’s why I want to share our experience with you, hoping that it can save you some time when you start implementing this.

Platform setup
In order to get a better understanding of why certain choices were made, I want to share a very basic version of the setup of our environment.

Our platform consists of 2 major parts: a private cloud part (that we see as “on-premise”), and an Azure part. Those 2 parts combined make up our platform, and this platform hosts all of our services, applications, databases, etc. The development approach we use is that everything should be designed cloud-first, but only if it’s the right tool for the job, and with the exception of some data.

The databases we use in Azure are all what Microsoft calls “Azure SQL databases”. This means that no virtual machines of any kind are running on the Azure part of our platform; this is also known as DBaaS (Database as a Service).

When I draw this platform, and only focus on the data part, it will look something like this:

One of the advantages of this setup is that we can leverage the power of both parts of the platform. As you’ll see later on in this blog, there are some things that we (need to) run from the on-premise instances and some things fully on Azure.

Big shoutout to Ola
Before I go into detail, I want to give full kudos to Ola Hallengren (Website | @olahallengren). He has spent a lot of his time building a SQL Server Maintenance Solution that is completely free for everyone to use. And he did such an excellent job that a lot of companies (including huge companies) use his solution to run maintenance tasks on their databases.

None of the scripts below are written by me; I’ve only made small changes to make things clearer when the solution is deployed to an environment. The original scripts can be downloaded via the download page on Ola’s website.

Backups & Integrity check
Taking backups of your database and making sure there is no corruption in the data files is an essential part of the maintenance solution written by Ola. But because Azure SQL databases have a built-in maintenance solution (for backups, for example: full backups weekly, differentials hourly, and transaction log backups every 5 minutes, source), we don’t need to worry about that ourselves.

Index maintenance & Update Statistics
Indexes and statistics are the core of your performance-based maintenance processes. These make sure your queries run fast, and should provide you with a stable and predictable performance. This is especially needed on an Azure database, since you can’t monitor it like you would with an on-premise database.

Because Azure SQL databases are run on shared hardware that you can’t monitor yourself, Microsoft provides us with a number of different performance counters that we can use to check the status/health of our databases. The most important counters are CPU usage, Data IO, Log IO and DTU usage (a combination of the previously mentioned counters). The DTU counter is the most abstract (to me at least), because it’s explained by Microsoft as:


The Database Transaction Unit (DTU) is the unit of measure in SQL Database that represents the relative power of databases based on a real-world measure: the database transaction. We took a set of operations that are typical for an online transaction processing (OLTP) request, and then measured how many transactions could be completed per second under fully loaded conditions.

For example, a Premium P11 database with 1750 DTUs provides 350x more DTU compute power than a Basic database with 5 DTUs.

And for me, who was used to monitoring physical hardware, that is a bit of a different approach when digging into performance-related issues. It’s not that index and statistics maintenance isn’t important when you work on an on-premise database, but it’s a slightly bigger challenge to monitor the direct effects of an index rebuild or statistics update.

But because every Azure SQL database is a contained database, you need to deploy the stored procedures from Ola’s solution to every single database. So to keep it clear for everyone which table and stored procedures belong to the maintenance solution, I’ve changed Ola’s scripts slightly to create all objects in a specific schema named “dba”. So first of all, let’s create the schema:
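--All maintenance objects will be created in this schema
CREATE SCHEMA dba
GO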


And then create the table used to log all of the maintenance commands and their outcome:

CREATE TABLE dba.CommandLog
	(ID INT IDENTITY(1,1) NOT NULL,
	 DatabaseName sysname NULL,
	 SchemaName sysname NULL,
	 ObjectName sysname NULL,
	 ObjectType CHAR(2) NULL,
	 IndexName sysname NULL,
	 StatisticsName sysname NULL,
	 PartitionNumber INT NULL,
	 ExtendedInfo XML NULL,
	 Command NVARCHAR(MAX) NOT NULL,
	 CommandType NVARCHAR(60) NOT NULL,
	 StartTime DATETIME NOT NULL,
	 EndTime DATETIME NULL,
	 ErrorNumber INT NULL,
	 ErrorMessage NVARCHAR(MAX) NULL,
	 CONSTRAINT PK_CommandLog PRIMARY KEY CLUSTERED (ID ASC))

Now that these are created, you can create the stored procedures that execute the actual index and statistics maintenance. They are too long to post here as a code snippet, but you can download the CommandExecute script here, and the IndexOptimize script here.

But because we want to schedule these procedures later on, I decided to create an additional stored procedure, which is deployed alongside the maintenance objects in every database:

CREATE PROCEDURE dba.ExecuteMaintenance
AS

EXECUTE dba.IndexOptimize
 @Databases = '<Insert database name>',
 @FragmentationLow = 'INDEX_REORGANIZE',
 @FragmentationLevel1 = 5,
 @FragmentationLevel2 = 30,
 @UpdateStatistics = 'ALL',
 @LogToTable = 'Y',
 @Execute = 'Y'


This allows us to run maintenance with specific options on the different databases.

After this was all set up, I needed to come up with a way to run these procedures on a schedule. And as you might know, Azure SQL databases don’t have a SQL Server Agent, so that’s where our on-premise platform comes in handy. Just for this I created a new virtual machine in our private cloud, and installed SQL Server on that machine to utilize the SQL Server Agent. This server (the operations server) runs all of the scheduled operational jobs (including maintenance and some monitoring jobs) against our Azure environment.

But to run a process from this operations machine against one of our Azure databases, I needed to create a linked server first:

EXEC sp_addlinkedserver 
 @server = N'ServerName__DatabaseName',	--Name of the linked server
 @srvproduct = N'Azure SQL Db',
 @provider = N'SQLNCLI',
 @datasrc = N'<servername>.database.windows.net',	--Your Azure SQL server
 @catalog = N'<DatabaseName>'	--Your Azure SQL database

EXEC sp_addlinkedsrvlogin 
 @rmtsrvname = 'ServerName__DatabaseName', 
 @useself = 'FALSE', 
 @rmtuser = '<USERNAME>',
 @rmtpassword = '<PASSWORD>'

And all that’s left now is to create a SQL Server Agent job, that executes the “ExecuteMaintenance” stored procedure on the Azure database:

EXEC [<LinkedServerName>].<DatabaseName>.dba.ExecuteMaintenance

The reason I chose this approach is that I didn’t want to lose any time figuring out how the Azure Scheduler or Automation works at the moment I was implementing the maintenance solution.

But if you want a good resource on that, please check out Pieter Vanhove’s blog (Blog | @Pieter_Vanhove), because he wrote a great post about it.

When I look at how this solution is set up, I’m the first to admit that this isn’t a perfect or ideal solution. But in the end, it gives the rest of the team a clear way to start or restart the maintenance process: just log in to the operations server, start SQL Server Management Studio (SSMS), open the SQL Server Agent jobs, find the job associated with the database they want to run the maintenance on, and that’s it. But for future scalability and maintainability, we might need to implement another solution.

SSRS Reporting automation with .NET

This article was recently published on dev.getroadmap.com in 2 separate posts:
SSRS Reporting automation with .NET
Application Authentication via https using NTLM


SQL Server Reporting Services (SSRS) is a great way to create an overview or analysis of your data that you can share with other people as a report. But what if you have a report that you need to share with a large group of people, but they need it with 50 different parameters (like CustomerID for example), and they want to receive it in Excel or PDF? Are you going to manually execute the report with 50 different parameters, export each one to the specific file format, and email those files? I don’t think so. Automating this process is easy if you write a small tool for it, and if you use the “Report Server Web Service URL”.

ReportServers vs Reports
Before we dive into the .NET code, let’s first see what the difference is between the URLs “http:// [servername] :80/ReportServer” and “http:// [servername] :80/Reports”. If you navigate to your SSRS server, you’ll be redirected to “http:// [servername] :80/Reports”. This is the default webinterface that you use to open reports, manage subscriptions, etc:

If you go to “http:// [servername] :80/ReportServer”, you’ll end up in the webservice of SSRS. This allows you to open reports and, as a bonus, add parameters to your http request, so you can automatically execute reports from a URL. This is also called the “SSRS Virtual Directory”:

Building a URL
Now that we know that we need to use the webservice, we can start building our URL. First, let’s start with the base-URL. I’ve created a folder in SSRS called “Test”, and a report called “TestReport”. So the base-URL will be: “http:// [servername] :80/ReportServer/Pages/ReportViewer.aspx?%2fTest%2fTestReport”. And because my report has 2 date-parameters (From and To), I need to add these to the URL: “&From=2015-12-01&To=2015-12-08”.

This URL doesn’t run the report yet, until you add the command for that to the URL: “&rs:Command=Render”. So your complete URL will look like: “http:// [servername] :80/ReportServer/Pages/ReportViewer.aspx?%2fTest%2fTestReport&From=2015-12-01&To=2015-12-08&rs:Command=Render”

One thing to keep in mind is that you need to add the dates in the URL in the correct format (yyyy-MM-dd). If you don’t do that, SSRS will throw an exception.

Where to find these URLs
If you log on to your SSRS server, you can start the “Reporting Services Configuration Manager”. This is the configuration tool for your SSRS instance.

In this tool you can configure both the webinterface URL:

And the virtual directory:

Text parameter in URL
But SSRS can also have text-fields as input for your report. These can also be added to the URL. Just like the parameters above, you just add the parameter name and value to the URL: “http:// [servername] :80/ReportServer/Pages/ReportViewer.aspx?%2fTest%2fTestReport&From=2015-12-01&To=2015-12-08&FreeText=This is a test…&rs:Command=Render”.

After some testing I found out that you can use any character you want in the text parameter, except for the &-sign. If you use that, SSRS will think it’s a parameter or command and won’t accept the URL. And there’s also the (browser) limitation of the URL length. Testing shows that the limit is somewhere between 7926 and 7931 characters. If your URL is below 7926 characters, it works like a charm. If you go above that (between 7926 and 7931) the behavior of SSRS gets buggy, and above 7931 characters SSRS will throw an exception.

Export to file
Exporting your report to a file can also be done via the URL. Adding “&rs:Format=EXCEL” to the end of the URL tells SSRS to export your report to Excel: “http:// [servername] :80/ReportServer/Pages/ReportViewer.aspx?%2fTest%2fTestReport&From=2015-12-01&To=2015-12-08&FreeText=This is a test…&rs:Command=Render&rs:Format=EXCEL”.

This output can be used to automatically store this file on disk or email it with a .NET application.

Export formats
There are several export formats in the webinterface of SSRS:

The available output formats depend on the version of SSRS you’re using. In SQL Server 2016 you have all the same export formats as you have in SQL Server 2014, but they added PowerPoint to that list.

Creating the application
To automatically download an exported report, I’ve created a “Windows Forms Application”. In this application we need to do 3 things:

– Determine variable values
– Build a URL
– Download/Export the report

To determine the variable values, I added 2 “DateTimePickers” to the form, and a “TextBox” for the CustomerID. Other than that, there are 2 buttons: 1 to get the URL (which might come in handy for testing), and 1 to export the report in the selected format. There’s also a “TextBox” so that you can configure the drop-folder for the files:

Build URL
In order to build the URL we need 5 pieces:

– The SSRS servername or URL
– The folder of the report (if it’s not in the root)
– The report name
– The parameters needed for executing the report
– The export format

In my case the folder (“Test”) and report name (“SSRSAutomationTestReport”) are known, so I hard-coded them:

string ReportServer =
    ReportServerURL
    + "/ReportServer/Pages/ReportViewer.aspx?%2fTest%2fSSRSAutomationTestReport"
    + "&From="
    + DT_From.Value.Date.ToString("yyyy-MM-dd")
    + "&To="
    + DT_To.Value.Date.ToString("yyyy-MM-dd")
    + "&CustomerID="
    + TB_CustomerID.Text
    + "&rs:Command=Render";

if (RB_Excel.Checked)
    ReportServer += "&rs:Format=EXCEL";

if (RB_PDF.Checked)
    ReportServer += "&rs:Format=PDF";

This results in the URL that you can use to export the report to a specific file format (in my case either Excel or PDF).

Download the file
To download the file we need to use the “CredentialCache”, because when you use the SSRS webservice to execute a report, an NTLM challenge takes place. The “CredentialCache” solves that 2-step authentication for you. After that, you can use “WebClient” to download the file. It looks like this:

var url = new Uri(ReportURL);

string FileExtension = ".pdf";

if (RB_Excel.Checked)
    FileExtension = ".xls";

var location = TB_Dropfolder.Text + "SSRSAutomationTestReport - Customer " + TB_CustomerID.Text + FileExtension;

// When calling the url, an NTLM challenge takes place
// Once this challenge takes place, GetCredentials will automagically be called via the CredentialCache
// This will resolve the 2 step authentication
// Requirement: the uri for the cache must be the Scheme + Host of the domain
var cc = new CredentialCache();
cc.Add(new Uri(string.Format("{0}://{1}", url.Scheme, url.Host)), "NTLM", new NetworkCredential(Username, Password, Domain));

using (var client = new WebClient())
{
    client.Credentials = cc;
    client.DownloadFile(url, location);

    MessageBox.Show("Report is exported");
}

Download the resources
To show you how I solved this, I’ve made the resources available for download. You can download the SSRS report here, and the Windows Forms application here.

Please feel free to download them, try them out for yourself, and let me know what you think.

Comparing execution plans with SSMS

In SQL Server 2016 (now available as CTP 3.0), a new feature is shipped: Execution Plan Comparison Tool. This new (and very cool) feature allows you to compare 2 execution plans within SQL Server Management Studio (SSMS). And according to Amit Banerjee (Blog | @banerjeeamit), this feature is also available in the “SSMS – September 2015” release (more info here). Let’s take a look at this new feature.

Creating resources
To generate an execution plan that we can compare later on, let’s create a table with some data first:

USE Sandbox


GO 10
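The create and insert script isn’t fully shown above; any small table with a bit of data will do. A minimal sketch (the table and column names here are just an illustration):

CREATE TABLE dbo.CompareTest
	(ID INT IDENTITY(1,1) PRIMARY KEY,
	 SomeValue VARCHAR(50))
GO

INSERT INTO dbo.CompareTest
	(SomeValue)
VALUES
	('Test data')
GO 10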

To generate some execution plans, run the queries below with the “actual execution plan” option on, and save these plans to disk:
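For example, using the sketch table above, these two queries produce plans that are similar but not identical, which makes them nice to compare:

--Query 1: run with the actual execution plan enabled, and save the plan as Test1.sqlplan
SELECT ID, SomeValue
FROM dbo.CompareTest
WHERE ID = 5

--Query 2: save this plan as Test2.sqlplan
SELECT ID, SomeValue
FROM dbo.CompareTest
WHERE SomeValue = 'Test data'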




I’ve also made the 2 plans that were generated available for download here and here.

Comparing execution plans
To compare execution plans, you need to open the first execution plan from SSMS. In the plan window, right-click and click on “Compare Showplan”:

Now a pop-up window opens, asking which plan you want to compare it with. Now open “Test2.sqlplan”. The compare window opens, and you can compare the plans:

As you can see, parts of the plan that are the same in both execution plans are colored the same. These colors are randomly chosen, and can be different every time you compare 2 plans. This makes it easy to determine where both plans are equal and where they differ.

If you click on one of the highlighted parts in the execution plan, the other plan will center its view on that part of the plan (unfortunately it’s a bit hard to see that with these small execution plans). If you click on a highlighted part, you can see that there is a blue rectangle drawn around the object:

Another really cool thing is the properties windows. If you don’t have that open by default, right-click on the first object in the execution plan (the SELECT part), and click on “Properties”. This shows 2 property-windows, in which you can compare the memory grant for both plans for example:

At this moment, you need to save both execution plans to disk in order to compare them. If you try to compare an execution plan of a query you just ran without saving it, it throws an exception:

But in my opinion this is just a minor issue, and I think this will be fixed in one of the next releases.

Comparing execution plans is definitely something I’ve been missing for years. Especially when you have 2 really big plans, this can really help you speed up the analysis and debug process.

Even though I’m pretty used to using SQL Sentry Plan Explorer to open execution plans, that tool doesn’t give me the option to compare plans (yet). At least not in the free version, which I always recommend to colleagues and friends.

This is another one of the really cool features in the overhauled SSMS, and I think the SQL Server team is rocking this new release.

Tracking query progress with Live Query Statistics

How frustrating is it to run a query on a database and have it seem “stuck”? We’ve all seen that happen, right? But how can you tell what the problem is, without letting the query complete (which could take a long time)? Microsoft (or actually the SQL Server team) gave us a new toy to play with that can help us in situations like this, and this tool is called “Live Query Statistics”.

Creating resources
To show you how Live Query Statistics works, let’s create a sample table first, and insert 1,000 rows:

CREATE TABLE LiveQueryStats
	(ID INT IDENTITY(1,1))	--A single column is enough for this example

INSERT INTO LiveQueryStats
DEFAULT VALUES
GO 1000

Because we want to actually see something happen, let’s multiply the number of rows coming from the single table by using the query below:

SELECT *
FROM LiveQueryStats T1
CROSS APPLY LiveQueryStats T2
CROSS APPLY LiveQueryStats T3

The query will return 1 billion rows (1,000 x 1,000 x 1,000), and will run for a while. This gives you the opportunity to look at the different features without rushing or losing your running query.

Live execution plan
Before running the query above, you need to enable Live Query Statistics, just like you would enable the normal execution plan. This is an extra button added to SSMS for SQL Server 2016 (and the downloadable version of course). When that’s enabled, you can execute the query, and SQL Server Management Studio (SSMS) will automatically switch to the live execution plan.

In this execution plan, you’ll see the data flow through the components, so you can actually see what SQL Server is doing at a specific moment:

Query completion percentage
Another great addition is the overall completion percentage, which you can find at the bottom of your SSMS. This shows you how far along the query is, which can help you estimate the time until completion:

Live rowcount
The properties window normally shows you the in-depth information of your query after completion (number of threads, memory grant, etc). With the live execution plan enabled, it can provide you with real-time statistics as well:
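Under the hood, this live information is exposed through the sys.dm_exec_query_profiles DMV, which you can also query directly for a session that is running with live statistics enabled. A quick sketch (the session_id is just an example):

--Rows processed per operator for a running session
SELECT	node_id,
	physical_operator_name,
	row_count,
	estimate_row_count
FROM sys.dm_exec_query_profiles
WHERE session_id = 53	--Replace with the session_id of the running query
ORDER BY node_id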

Live execution plan from Activity Monitor
From the Activity Monitor you can also open the live execution plan of running queries on your instance. In the Activity Monitor, you have an additional tab in SQL Server 2016 called “Active Expensive Queries”. In this tab you can right-click on a running query, and click on “Show Live Execution Plan”. This opens a new tab in SSMS with the execution plan:

Unfortunately this only works for queries that have the live statistics enabled before execution. I don’t expect this to change in the final product, mainly because of the negative performance impact this feature can have on your queries and instance.

Drawbacks, Limitations and Bugs
Because this is only a CTP version of SQL Server 2016, we can expect some bugs and limitations, so I’ll be the last one to judge. And I don’t think these limitations are a big drawback of the feature.

One of these bugs is a crashing SSMS. If you open a live execution plan from the Activity Monitor, and close that tab, SSMS crashes every now and then.

Another weird thing is that exiting SSMS by clicking on the close button (top right) while the live execution plan is open causes the list of recently used SQL Servers (the list used when connecting the Object Explorer, for example) to be cleared for some reason.

This behavior is reproducible, so I think this is a small bug in SSMS, or a side effect of the fact that I upgraded this instance from the first CTP version to the current version. But I’m sure this will be fixed by the SQL Server team in the next releases.

The biggest pitfall (in my opinion) of this feature is also mentioned in the documentation. This feature is “primarily intended for troubleshooting purposes” and “can slow the overall query performance”. And I know, it looks SO COOL to have this on all the queries you’re running, but please be careful with this. Don’t enable it on every running query, but only use it to debug issues!

Another limitation of this feature (at least at the moment I’m writing this) is that it can’t be used in combination with columnstore indexes, memory-optimized tables and natively compiled stored procedures. You can read more about this here.

And I shouldn’t even have to mention this, but remember: you can only use Live Execution Plans when you have SHOWPLAN permissions on the database (same permissions you need to view normal execution plans).

One of the questions I had when I read about this: are there any alternatives to this? One of the only things that comes to mind is the “Track My Query” tool, written by Matan Yungman (Blog | @MatanYungman). This tool allows you to monitor your query, and it will show you which part of your query is currently running. For more in-depth information, I recommend the SQLBits session where he explains the inner workings of this tool. I’ve seen this session in person, and it was an interesting session.

With all these awesome new features that will be shipped in SQL Server 2016, I think there’s a whole new way of looking at SQL Server as a product. I’ve always liked working with SQL Server, but this version is taking it to the next level.

With all these new features, debugging issues is going to be a bit easier, developing new stuff is going to be faster, and the overall usability is going to skyrocket. I can’t wait to get my hands on the finished product!