Write readable and high-performance queries with Window Functions

In SQL Server 2008, we gained a new and powerful feature in our bag of T-SQL tricks: Window Functions. The heart of a Window Function is the OVER clause, which allows you to define partitions or "groups" in your query before applying another function. In practice, this means you can create groups in your dataset that can be molded further by applying functions to those groups.

In the past, I’ve written a number of posts about Window Functions in SQL Server:

Row_Number: Unique ID in select statement
Calculating Running Totals
Removing duplicates in a dataset

But there’s more!

 
Finding missing numbers in sequence
How many times have you looked at a table and noticed that one of the values in a sequence is missing? Or at a date range in a table where someone deleted a few records? All of a sudden, your year doesn't have 365 days (or 366 in a leap year, or 365.242199 days, the average length of a year), but 315 days. There go your management reports that are calculated on a per-day average. So how do you find the missing dates, without having to write a cursor or create a full-blown date table like your BI colleagues do? You guessed right: a query with a Window Function!

Let’s start with declaring a table, and insert a sequence with missing values:

DECLARE @Sequence TABLE
  (Number INT)


INSERT INTO @Sequence
  (Number)
VALUES
  (1),
  (10),
  (7),
  (4),
  (2),
  (8),
  (5)

 
So how can we get the “nearest numbers” from that table with just a single select statement?

SELECT
  LAG(Number, 1, 0) OVER (ORDER BY Number) AS LAG_Value,
  Number,
  LEAD(Number, 1, 0) OVER (ORDER BY Number) AS LEAD_Value
FROM @Sequence AS S
ORDER BY Number ASC

 
LAG and LEAD are standard T-SQL functions from SQL Server 2012 onwards. These functions give you the opportunity to access the previous or next row, without the need for a so-called "self-join". So what you see is the number, the value preceding it (LAG), and the value following it (LEAD). In this case, Number 2 is preceded by Number 1, and followed by Number 4.
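And because LEAD gives you the next value in the sequence, finding the actual gaps is only one small step further. The query below is a sketch of my own (so treat it as a starting point, not as the only way to do it): it compares every number with the next one, and returns the rows where the difference is bigger than 1:

;WITH NumberedSequence AS
(
  SELECT
    Number,
    LEAD(Number, 1, Number) OVER (ORDER BY Number) AS NextNumber
  FROM @Sequence
)
SELECT
  Number AS GapStartsAfter,
  NextNumber AS GapEndsBefore
FROM NumberedSequence
WHERE NextNumber - Number > 1   /* a difference bigger than 1 means at least one number is missing */

For the sample data above, this returns the pairs (2, 4), (5, 7) and (8, 10), so the missing numbers are 3, 6 and 9.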

 
The lemonade stand
Now let's look at another example. How about you? When you grew up, you wanted to save money for a new mobile phone, right? In my case it was either a Discman, a Walkman, or a stereo set. But let's stick with the modern equivalent of the Walkman for now: the MP3 player. To earn money for the MP3 player, our fictitious friend, who is called Joe, decides to start a lemonade stand. He needs to save up at least $150 to buy a new MP3 player. So every glass of lemonade he sells is accounted for, and at the end of the day he sums up all his earnings, and puts them into a table:

DECLARE @Profit TABLE
  (DayNumber INT,
   Sales DECIMAL(10,2))


INSERT INTO @Profit
  (DayNumber, Sales)
VALUES
  (1,  6.90),
  (2,  4.17),
  (3,  2.69),
  (4,  7.26),
  (5,  2.93),
  (6,  8.98),
  (7,  7.25),
  (8,  5.88),
  (9,  1.51),
  (10, 7.97),
  (11, 3.44),
  (12, 3.76),
  (13, 9.96),
  (14, 0.92),
  (15, 8.28),
  (16, 6.05),
  (17, 9.40),
  (18, 4.03),
  (19, 9.14),
  (20, 7.25),
  (21, 0.06),
  (22, 9.12),
  (23, 7.39),
  (24, 6.57),
  (25, 4.54),
  (26, 0.09),
  (27, 4.42),
  (28, 9.53),
  (29, 5.09),
  (30, 0.89)

 
So as you can see, he earns quite a lot of money this way! But because he's eager to buy his new MP3 player, he wants to see his daily totals, and the amount he still needs to buy his new toy. And because Joe is a really smart guy, he doesn't want to do this with a lot of self-joins, and he wants his results fast. So looking at performance, what is the easiest way to query this data? How about this:

DECLARE @Goal DECIMAL(10,2) = 150.00


SELECT
  DayNumber,
  Sales,
  @Goal - SUM(Sales) OVER(ORDER BY DayNumber
                  ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS MoneyNeeded
FROM @Profit

 
He declares a "goal" variable that is set to the amount he needs for his new gadget. Then, for every row, we subtract the running total of his sales from that goal: the sum of everything he sold from the first day (UNBOUNDED PRECEDING) up to and including the current day (CURRENT ROW). After day 28 he has earned enough to buy his MP3 player. But now he wants to know what his average sales were. So he calculates the average of his sales, based on every sale he's made so far:

SELECT
  DayNumber,
  Sales,
  AVG(Sales) OVER(ORDER BY DayNumber
                  ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS AverageSales
FROM @Profit

 
So where does it stop?
So now that we've seen the SUM and AVG options, what do we have left? How far can we take this? Thinking about it, how about a daily check to see if we hit a new lowest or highest sales amount? We can do this with MIN and MAX in the same kind of query:

SELECT
  DayNumber,
  Sales,
  MIN(Sales) OVER(ORDER BY DayNumber
                  ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS LowestSale
FROM @Profit


SELECT
  DayNumber,
  Sales,
  MAX(Sales) OVER(ORDER BY DayNumber
                  ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS HighestSale
FROM @Profit

 
Now Joe can record his daily sales in the evening, and check whether he had a very good or a very bad day.
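And if Joe wants to see all of these numbers side by side, nothing stops him from combining them in a single statement. The query below is a sketch that reuses the @Profit table and the @Goal variable declared above:

SELECT
  DayNumber,
  Sales,
  @Goal - SUM(Sales) OVER(ORDER BY DayNumber
                  ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS MoneyNeeded,
  AVG(Sales) OVER(ORDER BY DayNumber
                  ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS AverageSales,
  MIN(Sales) OVER(ORDER BY DayNumber
                  ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS LowestSale,
  MAX(Sales) OVER(ORDER BY DayNumber
                  ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS HighestSale
FROM @Profit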

 
Not only for the average Joe…
So how can the business profit from all this? In some cases, Window Functions make it easier to produce a certain resultset. In other cases they even give you a whole new way to output data with a single, well-performing query, something that simply wasn't possible before. So if you're running SQL Server 2008 or higher, start using (or at least start exploring) Window Functions right away!

 

To generate the random floats, I’ve used the generator of FYIcenter.com

If you want to read more about this topic, don’t forget to check out these blog posts:

- Julie Koesmarno: ABC Classification With SQL Server Window Function
- Mickey Stuewe: A Date At The End of The Month
- Chris Yates: Windows functions who knew

How SQL Search saves you time

Since 1999, Red Gate Software has produced ingeniously simple and effective tools for over 500,000 technology professionals worldwide. From their HQ in Cambridge UK, they create a number of great tools for MS SQL Server, .NET, and Oracle. The philosophy of Red Gate is to design highly usable, reliable tools that solve the problems of DBAs and developers.

Every year Red Gate selects a number of active and influential community members (such as popular blog writers and community site owners) as well as SQL and .NET MVPs who are experts in their respective fields, to be part of the Friends of Red Gate (FORG) program. I'm proud to announce that I'm part of the 2014 FORG selection. This post is part of a series of posts, in which I try to explain and show you why the tools of Red Gate are so loved by the community.



 
Red Gate? No thank you!
One of the biggest prejudices about the tools from Red Gate is that you have to sell one of your kidneys in order to afford one of them. I agree with you, some of the tools Red Gate sells are pretty expensive, especially when you need to buy them yourself. But what if I told you they pay for themselves in the long run? You don't believe me? Okay, let's start off with a free tool to convince you.

 
How SQL Search can save your bacon
As a DBA and SQL Server developer, one of your biggest challenges is to memorize your whole environment, and to know every line of T-SQL in it off the top of your head. I'm sorry? Oh, you don't do that? Well, that doesn't really come as a surprise. And if you thought: "Hey! I'm doing that too!", stop wasting your time! No DBA, wherever in the world you look, will EVER remember all of the T-SQL scripts, stored procedures, views, etc. that can be found in his or her environment. So where do they get their information from? What would you say if I told you that you could search through all objects in a specific database, or on a specific instance?

So how do you do that? You’re planning on making your own script to search through the system view sys.columns for column names, and sys.procedures for Stored Procedure text? There is an easy way out, you know!
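(And if you do insist on rolling your own anyway, a very rough starting point could look like the sketch below. It only searches object names, column names and module definitions, so it finds a lot less than SQL Search does.)

DECLARE @SearchTerm VARCHAR(128) = 'Department'

/* Object names */
SELECT o.name AS ObjectName, o.type_desc AS MatchInfo
FROM sys.objects AS o
WHERE o.name LIKE '%' + @SearchTerm + '%'

UNION ALL

/* Column names */
SELECT OBJECT_NAME(c.object_id), 'Column: ' + c.name
FROM sys.columns AS c
WHERE c.name LIKE '%' + @SearchTerm + '%'

UNION ALL

/* Text of views, stored procedures, functions and triggers */
SELECT OBJECT_NAME(m.object_id), 'Object definition'
FROM sys.sql_modules AS m
WHERE m.definition LIKE '%' + @SearchTerm + '%'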

 
Installing and testing
In order to create a test case that you can repeat on your own SQL Server, I'm using the AdventureWorks2012 database. That's a free example database that you can download from CodePlex. I've chosen to download the backup file, which I restored on my local SQL Server.

The installer for SQL Search can be found on the Red Gate site. Once you've installed SQL Search, a button is added to SQL Server Management Studio (SSMS):

 
So, what do you actually get? If you click the button in SSMS, you’ll see a new window, that’ll act as a result pane. The top of the new screen looks like this:

 
This screen contains all possible options you will need to find objects either on your server, or in a specific database. You see a textbox on the left, where you fill in your search term. The next few options can be used as a filter. You can search on an exact match (checkbox), search on a specific object only (dropdown list), and on a specific database (dropdown list). The last dropdown list is the instance you want to search on. This list is populated with the open instance connections from your object explorer.

 
Test case
So how does this work in practice? One of your colleagues comes up to you, and asks you what objects in your database are related to department data. You could search for documentation in order to answer that question, or you could let SQL Search give you the answer. So, let's search the AdventureWorks2012 database for objects that are related to, or contain, department data. The result will look like this:

 
As you can see, there are 18 objects that are related to your search term (the object count is visible at the bottom right of the search results). Some objects are shown multiple times, because there are multiple matches on that object, for example the "vEmployeeDepartment" view: the name of the view contains our search term, one of its columns is called Department, and the text of the view (the create script) contains your search string.

But how does this work in real-life situations? How many times do your colleagues ask you which objects are related to a specific table or column? As a DBA you probably get this question more often than you would like. Your developers want to rebuild an application, or add a new feature to it, but they're not sure if the database change they'll make will break other applications or processes.

It’s also possible for you to use the tool to your own advantage. For example, when you want to know which object can update a specific employee record. You just search for both terms, and you’ll see that there are 3 stored procedures that update employee information:

 
Please hold on, while I search for the object…
…is a sentence you never have to use again when working with SQL Search. Once you've found the object you were looking for, you can just double-click on it, or use the button between the result pane and the script pane:

This will look up the object in the object explorer. So you never have to look for an object after you found it. Just let SQL Search do all the hard work.

 
Red Gate? They’re AWESOME!!!
Hopefully you've changed your mind about Red Gate tools after reading this article. SQL Search is one of the tools I personally use on a daily basis. Even if there is documentation within your company, you still need to find it. It's printed and lying around somewhere, or on an intranet or SharePoint. You know where to find it, except when you REALLY need it!

SQL Search is also a tool you (in my opinion) really need when you're new to a company. You'll see a lot of different databases, with different purposes, and maybe even different DBAs responsible for each specific database. Using SQL Search gives you a great advantage when you need to have a chat with such a DBA. You'll step into their office with a little more knowledge of the database, without reading endless documents, cryptic ERDs, and folders full of unnecessary documentation.

 
Feedback
When you start using the tool, don’t forget to thank the people from Red Gate. They LOVE to hear your feedback, either in a tweet, the Red Gate Forums, or by contacting support. You could also send me a mail or tweet, or leave a comment at the bottom of this post. I would love to answer your questions (as far as I can), or pass them on to Red Gate.

If you want to read more about SQL Search, don’t forget to check out these blog posts:

- Julie Koesmarno: SQL Tools Review: SQL Search
- Mickey Stuewe: On a SQL Quest using SQL Search by Red Gate
- Chris Yates: Headache + Pain Red Gates SQL Search

T-SQL Tuesday #51 – Place Your Bets

T-SQL Tuesday is a recurring blog party that was started by Adam Machanic (Blog | @AdamMachanic). Each month a blogger hosts the party, and everyone who wants to can write a blog post about a specific subject.

This month the subject is “Place Your Bets”. If you want to read the opening post, please click the image below to go to the party-starter: Jason Brimhall (Blog | @sqlrnnr).



 
When I read about this month's T-SQL Tuesday topic, the first thing that came to mind was things that you know will go wrong sooner or later. When you encounter a situation like this, you immediately know it can't last forever. You want to fix it when you see it, but there's no money, or no time at that moment. But they promise you that in a few weeks you can take all the time you need. Well, that'll never happen... until things go wrong, and you get to clean up the mess. Sounds familiar? Yes, we've all seen this, or will see it sooner or later.

 
With great power comes great responsibility
Just imagine this with me. One of your colleagues asks you to look at a problem he’s having with a script someone in your company wrote. You probably solved it while he was standing right next to you. He watches you solve the problem, and when it’s solved, he walks away with a thousand-yard stare in his eyes. You don’t really think about it when it happens, but it’ll come to you…

A few weeks later, it's 10 AM and you're still having your first coffee of the day, when the same developer asks you to look at "his script". Wait, what?! Yes, he watched you work your magic, and that funny "Es-Que-El" language seemed easy to learn. So he bought himself a copy of "SQL Server for Dummies", learned all he needed to know in a single weekend, and now wonders why it took you so long to learn it. From now on, he can write his own scripts, so he doesn't need you anymore. Except for this last time.

Opening the script scares you: it's a cursor. But in your frustration and amazement you "fix" the broken script by refactoring the select statement in his cursor. Because the cursor only collects data, you add a "TOP 10" clause to the select statement, and run the script as a test. Nice, it finishes in 25 seconds. "It will only consume 500 rows" is the last thing you hear him say. You send the guy off, so you can continue your own work.

Later in the day, at about 4 PM, you meet the same guy at the coffee machine. He starts a discussion about how he needs a new PC, because the script YOU wrote is slow (see where this is going…?). It has been running for about 4 hours now, while it should only collect about 500 records. I know what you think: that's impossible. You walk with him to his desk, stop the script, and look at his code. That isn't the query you looked at this morning. Asking your colleague about it explains it all: he "slightly refactored" the script, because he didn't need all those weird statements to get his results. Well, after a fiery discussion of a few minutes, you explain to him that he DOES need the "FETCH NEXT" in the cursor loop, because without it the script keeps running the same statement against only the first record of the select statement you declared for the cursor.
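For those of you who have never been bitten by this: the skeleton he stripped out looks roughly like the sketch below (a simplified example with a made-up dbo.Employees table; the second FETCH NEXT is the part that went missing):

DECLARE @EmployeeID INT

DECLARE EmployeeCursor CURSOR FOR
  SELECT ID FROM dbo.Employees

OPEN EmployeeCursor
FETCH NEXT FROM EmployeeCursor INTO @EmployeeID

WHILE @@FETCH_STATUS = 0
BEGIN
  /* The actual row-by-row work goes here */

  /* Without this line, the loop keeps processing the same (first) row forever */
  FETCH NEXT FROM EmployeeCursor INTO @EmployeeID
END

CLOSE EmployeeCursor
DEALLOCATE EmployeeCursor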

So this funny "Es-Que-El" language isn't that easy to learn after all. There's a beautiful quote about that, although I'm not sure who said it: "T-SQL is easy to learn, but hard to master". So putting all your money on one horse, in this case a single book, isn't a good idea.

 
Putting your money on one color
Another great example is a company that had a wonderful Business Intelligence environment. They used the whole nine yards: SQL Server, SSIS, SSAS, SSRS, etc. The downside, you ask? It was all hosted on 1 physical machine, on a single SQL Server instance. Oh, and it was running low on disk space, and there was no room in the chassis to put in extra disks. That's right: it was like juggling burning chainsaws with only one hand. Or an interesting challenge, if you will.

Eventually we hosted a few databases on NAS volumes. At that point, I was told the databases we moved were less important. Pro tip: never EVER trust them when they say that!!! They forgot to tell me that the biggest of the moved databases wasn't in the backup plan (a 500 GB database takes a long time to back up), and that the last backup was made over a year ago. Surprise: one night the network card failed for maybe only a microsecond, and SQL Server thought the LUN was offline or the disk had crashed. So SQL Server marked the database as corrupt, and reported that the datafiles were unavailable. After a few hours, a reboot of the server fixed it, and SQL Server could see the disk volumes again. So the database was saved after all.

But you see where I’m going with this? You never know when things go wrong, and putting all your money on one color when playing roulette isn’t the best idea. If the hardware of your single server fails, you fail.

 
Next, Next, Finish?
But the biggest example I can give you of a badly placed bet is companies that work with SQL Server, but don't hire a DBA. Have you ever worked for a company that works with Oracle? Every single company that works with Oracle has a dedicated Oracle DBA. But have you ever wondered why that isn't the case when a company works with SQL Server?

Thinking about it, I guess this is because a successful SQL Server installation is only a few "Next, Next, Finish" mouse clicks away. So if the installation is that easy, surely every developer or person with IT experience can administer it, right? They couldn't be more wrong. You know that, I know that, every SQL Server professional knows that, but try to convince other people of that fact.

So the worst bet you can place, and this is how I write myself back to this month's subject, is not hiring a professional to manage your data and data stores. You wouldn't let your local baker fix your car just because he read some books about cars, right? So why do you let a developer with basic knowledge near your SQL Server? Just because real DBAs cost money? Yes, we do cost some serious money. But in the end, at least when you hire a GOOD DBA, they will make you money. You don't think so? What does a DBA cost per hour? And how much money do you lose when your servers are down for just an hour?

SSIS – Remove empty rows from Excel import

From the first time I used SSIS, I loved it. In most cases a package is easy to create, easy to understand, and even readable for people who don't "speak fluent SQL". But what if you want to perform an easy task, and the result isn't what you expect?

 
Formatting an Excel sheet
One of the most basic tasks you can create in SSIS, is importing an Excel sheet. Most of the time this works like a charm. But in my case, I wanted to filter out some rows from the workbook.

The business delivers an Excel sheet, that needs to be imported into the database. But because they don’t have the technical knowledge we have, they don’t know how important the format of the file is. They sent us this file (I’ve created a smaller sample, so it’s easier to read and understand):

 
The first thing you'll notice as a data professional is the 2 empty rows in the sheet. Besides that, we have an employee without a name. You know this is going to cause problems the moment you see it. These errors are easy to spot in the example, but imagine if these rows are hidden in a dataset of 50,000 or more rows. So even though they might have ended up there accidentally, your process is going to fail.

 
When you add an “Excel Source” to your package, and you look at the preview of that import, you immediately see the problem:

 
Table structure
In order to determine what columns can be left blank, and what columns can’t be NULL, I looked at the table structure:

CREATE TABLE ResultSSIS
  (ID INT IDENTITY(1, 1),
   FullName VARCHAR(50) NOT NULL,
   Department VARCHAR(50) NULL,
   EmployeeNumber INT NOT NULL)

 
So in the dataset, FullName and EmployeeNumber are mandatory, and Department is optional. With this in mind, I started to work on a way to exclude the invalid rows.

 
Import without excluding
The first thing I tried was to import the file, and see what the results were. Because I knew the data wasn't correct, I didn't want to import the Excel sheet into a SQL Server database just yet. So as a destination, I used the "Recordset Destination" control in SSIS. Importing the data into this in-memory table also allowed me to use the "data viewer" to see the imported data, without the need to truncate a table after each run. You can enable the "data viewer" by right-clicking the import connector (the arrow between controls), and clicking "Enable Data Viewer":

 
If you run the SSIS package in debugging mode, you’ll see the data that is imported in a pop-up window:

 
As you can see in the screenshot above, the records with NULL values in them are included in this import. So which records do we want to exclude, based on our table structure?

 
So from the 6 records in the Excel sheet, we want to exclude 3 from our import because of NULL values. But how do we do that? The easiest way to solve it is to import the data into a temp table, delete the NULL records, and insert the remaining records into the destination table (a sketch of that approach is shown below). But what if that isn't possible, and you want to filter the records during your import? In that case I've chosen to use the "Conditional Split".
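For completeness, the temp table route could look something like the sketch below (assuming the raw Excel rows are first loaded into a staging table; the names are made up for this example):

CREATE TABLE #StagingEmployees
  (FullName VARCHAR(50) NULL,
   Department VARCHAR(50) NULL,
   EmployeeNumber INT NULL)

/* The SSIS package (or a plain import) loads the raw Excel rows into the staging table here */

/* Remove the rows that violate the NOT NULL columns of the destination */
DELETE FROM #StagingEmployees
WHERE FullName IS NULL
   OR EmployeeNumber IS NULL

/* Insert the remaining rows into the destination table */
INSERT INTO ResultSSIS
  (FullName, Department, EmployeeNumber)
SELECT FullName, Department, EmployeeNumber
FROM #StagingEmployees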

 
Conditional Split
You don't have to rebuild your whole package when you want to exclude records with the "Conditional Split". You can just add this control, at least in this case, in between your source and your destination. If you open the control, you can add an expression that is used to filter records. In my case, I wanted to exclude the rows with an empty "FullName" or an empty "EmployeeNumber":

 
When connecting your "Conditional Split" to your destination, SSIS will ask you which output the "Conditional Split" needs to return. To output the entire set without the empty rows, choose the "Conditional Split Default Output":

 
When you run your package with the extra "Conditional Split" (and you enable the Data Viewer again), you'll see the filtered output of the "Conditional Split". The 3 NULL records are excluded as expected:

 
Conclusion
SSIS is easy to use, and yet a really powerful tool. Even when you've built your processes in SSIS, it's not always necessary to rebuild your whole package when something changes. Sometimes you can save the day with just a minor change. That's the power of SSIS!

T-SQL Tuesday #49 – Wait for it…

T-SQL Tuesday is a recurring blog party that was started by Adam Machanic (Blog | @AdamMachanic). Each month a blogger hosts the party, and everyone who wants to can write a blog post about a specific subject.

This month the subject is “Wait for it…”. If you want to read the opening post, please click the image below to go to the party-starter: Robert Davis (Blog | @SQLSoldier).



 
Explaining to developers how SQL Server works is something we all do. Maybe not on a daily basis, but you're asked questions like "why is my index not working?", or "what's the best way to add multiple columns to a table?". And most of the time, these questions lead to a whole other bunch of questions you need to answer. And there's one question we're all asked more than once: "why is my query running slow?". So where do you start explaining?

 
Wait Types
There are lots and lots of wait types that can be found in SQL Server. In SQL Server 2005 there are 230 different wait types, 475 in SQL Server 2008 and 491 in SQL Server 2008 R2. In SQL Server 2012 they added another 197 new ones to the list. The wait types can be found by running this query:

SELECT wait_type
FROM sys.dm_os_wait_stats
ORDER BY wait_type ASC
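The list of names by itself doesn't tell you much. If you want a quick impression of the waits your instance has spent the most time on since the last restart, a query along these lines can help (just a sketch; it doesn't filter out the harmless system waits):

SELECT TOP 10
  wait_type,
  waiting_tasks_count,
  wait_time_ms,
  signal_wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC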

 
These wait types can tell you what SQL Server is doing to execute your statement, and what the possible delays are. I’m not going to sum up all the wait types, but here’s a short list of common wait types you’ll see on your SQL server:

 
SOS_SCHEDULER_YIELD
Yielding processor time

LCK_M_*
Waiting for a lock

OLEDB
Wait on the OLEDB provider (Linked servers, Full-Text Search)

WRITELOG
Writing transaction log to disk

RESOURCE_SEMAPHORE
Waiting for a query memory grant

CXPACKET
Query parallelism

PAGEIOLATCH_*
Latch on a memory address while data is retrieved from disk

LAZYWRITER_SLEEP
System process waiting to start

 
All these different wait types could indicate a problem with your statement or the server. Some are more informative, while others show you a real issue. But what I really would like to show you, is how you can find these wait types.

 
DIY or just ask for help…
One of the ways to find the wait types on your SQL Server is to dive into the seemingly endless list of DMVs. You could use the "sys.dm_exec_requests" and "sys.dm_os_waiting_tasks" DMVs to find what you want, or you could take the easy way out: sp_WhoIsActive by Adam Machanic (Blog | @AdamMachanic).
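If you do want to take the DIY route first, a starting point could be something like the sketch below. It shows the currently running requests, the wait type they're waiting on, and the statement they're executing:

SELECT
  r.session_id,
  r.status,
  r.wait_type,
  r.wait_time,
  r.blocking_session_id,
  t.text AS statement_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id > 50   /* skip most of the system sessions */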

Adam (also the party starter of T-SQL Tuesday) wrote a no less than brilliant script to find problems on your server. But how does it work?

 
Installing sp_WhoIsActive
The “installation” of sp_WhoIsActive couldn’t be easier. You just need to download the script, and run it. This creates a stored procedure in the database of your choice. Usually I just create it in the master database. But if you have a DBA database with useful scripts, it’s okay to create it there.

 
Running it for the first time
The stored procedure can be executed without any parameters. That way, you use the default options. Just run the statement shown below:

EXEC master.dbo.sp_WhoIsActive

 
If you need it, or just like to see more information, you can also configure the procedure with a lot of parameters. If you want to see all the options you can configure, just set the @help parameter to 1 (true):

EXEC master.dbo.sp_WhoIsActive
  @help = 1

 
The options
If you start using sp_WhoIsActive more and more, you’ll get your own set of favorite options. It all depends on the purpose you’re using the procedure for. Most of the time, I use it to determine why queries run slow, or why the performance of the SQL server is so low.

The information sp_WhoIsActive retrieves gives you a good indication of what SQL Server is doing, or what queries are bugging each other. I’ll list my favourite options below:

First, I set @show_own_spid on, so I can see my own query in the resultset.

The second option I love is @get_plans. This shows you the execution plans of the running queries:

 
Another great parameter to set is @get_outer_command. That way, you won’t just see the query running at the moment, but also the outer-command of the query (in the example below, the INSERT INTO the temp table is executed from within the stored procedure you see in the right column):

 
To see transaction (log) information for your running query, set @get_transaction_info to 1:

 
Information regarding locks can be found, by setting @get_locks to 1:

 
If you click the XML, you’ll see which locks are granted, pending or denied:

 
The last option I’d like to set, is @get_additional_info. This will show you more information regarding the connection settings, session variables, etc:

 
Clicking the XML shows you the properties I mentioned above:

 
So this is what the query looks like, the way I personally like to use it:

EXEC master.dbo.sp_WhoIsActive
  @show_own_spid = 1,
  @get_plans = 1,
  @get_outer_command = 1,
  @get_transaction_info = 1,
  @get_locks = 1,
  @get_additional_info = 1

 
Conclusion
Wait types are your key to unlocking the next level of SQL Server. Not all wait types are that easy to read and understand, but there are plenty of resources to be found online. For example, just take a look at the rest of the posts today. Most of the posts for T-SQL Tuesday can be found on Twitter, when you search for #TSQL2sDay.


I want to say thanks to the employees at Coeo for the easy explanation of some of the wait types!

Incremental updates with Change Data Capture

When designing a database or ETL process, for example to load your production data into a reporting environment, you always start your design with performance in mind. At the beginning of the project, your scripts and ETL run blazingly fast. But after a few months in production, the entire project grinds to a halt. So how do you fix that problem, without a complete redesign of your applications and database? One of the many solutions is an easy one: incrementally load your data into the destination tables.

 
Change Data Capture
Incremental data loading can be a hard nut to crack. It's not always an option, but it might be a good point to start from. One of the ways to start loading your data incrementally is by using the keys in your database as a reference. If your table has a column called "Modified Date", and that column is updated every time the record is updated, you could use that. Every night, when the process runs, you just pick up the records that were modified after the last successful run (a sketch of that idea follows below). But what if you don't have that possibility? Change Data Capture (CDC) is an easy way out.
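A minimal sketch of that "Modified Date" approach could look like the query below (the table and column names are made up, and it conveniently ignores the fact that previously loaded rows that were updated need an UPDATE or MERGE instead of a plain INSERT):

DECLARE @LastSuccessfulRun DATETIME = '2014-01-01'   /* normally read from a process log table */

INSERT INTO ReportingDB.dbo.Orders
  (OrderID, OrderAmount, ModifiedDate)
SELECT OrderID, OrderAmount, ModifiedDate
FROM SourceDB.dbo.Orders
WHERE ModifiedDate > @LastSuccessfulRun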

 
CDC is a way to record inserts, updates and deletes on a specific table, without the need to write triggers yourself. CDC reads the transaction log, and captures all changes made to the specific table. These changes are stored in the associated change table that is created by CDC.

Below I'm going to show you how to set up your first table with CDC. If you would like to know more about CDC, this TechNet article is a good place to start.

 
Create an example
To show you the basics of CDC, let's start by creating a table called TestCDC in the database called Sandbox:

USE Sandbox
GO

CREATE TABLE dbo.TestCDC
  (ID int IDENTITY(1,1) NOT NULL PRIMARY KEY,
	 Descr varchar(50) NULL)
GO

 
Once you've created the table, turn on CDC at the database level, by executing the system stored procedure created for that:

EXEC sys.sp_cdc_enable_db

 
There is also a system stored procedure to enable CDC on the table level. You need to enable CDC on tables manually, and separately for every table you need:

EXEC sys.sp_cdc_enable_table
  @source_schema = 'dbo',
  @source_name = 'TestCDC',
  @role_name = NULL

 
If the SQL Server Agent is running on your machine or server, you’ll see this confirmation (I’ll explain later why SQL Server Agent is needed):

 
If the Agent isn’t running, you’ll see this warning:

 
If you ran the enable table statement, you will see that SQL Server created the system objects needed to track changes in the table:

 
Because CDC uses 2 SQL Server Agent jobs to capture changes and clean up the change tables, the Agent needs to be running to start the data capture. If the jobs aren't running, SQL Server won't capture any changes made:

 
Start data changes
In order to see what happens when you change data, let’s insert some records:

INSERT INTO dbo.TestCDC
  (Descr)
VALUES
  ('This is a description')

INSERT INTO dbo.TestCDC
  (Descr)
VALUES
  ('This is a description too...')

 
And let’s update one of those 2 inserted records:

UPDATE dbo.TestCDC
SET Descr = 'UPD - ' + Descr
WHERE ID = 2

 
Now, let’s check the content of both the original table, and the change table:

/* Original table */
SELECT * FROM dbo.TestCDC

/* Change table */
SELECT * FROM cdc.dbo_TestCDC_CT

 
If you run both queries, you’ll see the resultset below:

 
The records in the CDC change table allow you to update the data in your reporting environment. You could query them yourself, by retrieving all the changes since your last update. You can also use the functions that return those changes for you, for example cdc.fn_cdc_get_net_changes_<capture_instance> (in our case cdc.fn_cdc_get_net_changes_dbo_TestCDC). You can read more about that system function here.
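A sketch of what such a call could look like for our dbo_TestCDC capture instance (the net changes function is created automatically, because our table has a primary key):

DECLARE @from_lsn BINARY(10) = sys.fn_cdc_get_min_lsn('dbo_TestCDC')
DECLARE @to_lsn BINARY(10) = sys.fn_cdc_get_max_lsn()

SELECT *
FROM cdc.fn_cdc_get_net_changes_dbo_TestCDC(@from_lsn, @to_lsn, 'all')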

 
Cleaning up after an update
Now that you've updated your reporting environment, it's a wise thing to clean up your CDC data. You could delete the records yourself with a DELETE statement, but another option is to use the system procedure for that: "sys.sp_cdc_cleanup_change_table". You can clean up your data using the following SQL statement:

DECLARE @Last_LSN VARBINARY(10) =
  (SELECT MAX(cdc.dbo_TestCDC_CT.[__$start_lsn]) FROM cdc.dbo_TestCDC_CT)

EXEC sys.sp_cdc_cleanup_change_table
  @capture_instance = 'dbo_TestCDC',
  @low_water_mark = @Last_LSN,
  @threshold = 5000

 
The query will retrieve the last LSN (Log Sequence Number), and remove everything that happened before that.

 
Cleanup of CDC
If you want to completely remove CDC (because we’re done testing), you can disable it on the table level by running the query below:

EXEC sys.sp_cdc_disable_table
  @source_schema = 'dbo',
  @source_name = 'TestCDC',
  @capture_instance = 'dbo_TestCDC'

 
The statement will clean up all the objects that were created to enable CDC on that specific table, but it will only stop CDC for that specific table. The fastest way to disable CDC on all tables in the database is to disable CDC at the database level, by running the query below:

EXEC sys.sp_cdc_disable_db

 
Conclusion
Loading data always takes time, and there are many factors that are important: your database size, your frequency of changes, your ETL process, etc. The time it costs you to move data can be reduced by rewriting your process to use incremental loads. CDC is one of the many ways to achieve this. It works out of the box, and doesn't require you to build any process yourself. But maybe your environment needs a custom process to operate the way you want it to. Not every feature in SQL Server is a so-called silver bullet, but sometimes it comes darn close to one…

T-SQL Tuesday #48 – Cloud Atlas

T-SQL Tuesday is a recurring blog party that was started by Adam Machanic (Blog | @AdamMachanic). Each month a blogger hosts the party, and everyone who wants to can write a blog post about a specific subject.

This month the subject is “Cloud Atlas”. If you want to read the opening post, please click the image below to go to the party-starter: Jorge Segarra (Blog | @SQLChicken).



 
In the last few years, "the cloud" has become more and more important in our lives. Not only in IT, or for us as database and data professionals, but also in our personal lives. Take a look around you. How many people do you still see carrying around a USB drive to store their data? Now do the same, and count the people that use a cloud solution for their data, like Dropbox, SkyDrive (if we are still allowed to call it that…), or Copy.com.

So everyone is storing their data in the cloud now. From personal information like a copy of a passport, to text files with people's password lists. So without jumping to conclusions just yet, I guess we trust the companies that hold our data, right…?

 
Trust
But now comes the hard (and controversial) part: we trust them with our personal data, but not our corporate data. It’s okay to store your passwords and private documents in the cloud, but it’s unthinkable that you store business data in the cloud!

So where is that distrust coming from? It probably has something to do with the whole NSA-thing. There, I said it! Without completely going off-topic, I would like to explain something about this statement.

My personal opinion is that people in the Netherlands are different from the rest of the world when it comes to their privacy. They don't care if the ISP is monitoring web traffic. They know it's being monitored, but they accept that as a fact. When it comes to downloading games, music or movies, they think they're entitled to that. But when it comes to government agencies monitoring the corporate data they put in the cloud, they draw the line.

 
Are you… the one…?
In the past few years, the discussion about on-premise versus off-premise data has intensified. People try to convince each other with arguments, and think the other is completely wrong.

A while ago, I encountered my first “cloud-company”. I’ve done some consulting for them, and they’ve set themselves the goal to move to the cloud within the next few years. The biggest advantages they see are costs, scalability and administration. And I fully agree with them.

 
Why use a cloud solution
Choosing a WASD (Windows Azure SQL Database) solution makes it easier to maintain your environment. You don’t have to monitor the hardware, and move to another server if your hardware fails or dies. This is all being taken care of by Microsoft.

Looking at the cost of a cloud solution is easy: it saves you money. When you run on-premise servers, you need a data center, electricity, a maintenance team, etc. When you use a cloud solution, you only pay for the hardware you need. And if you're done with it, you can just shut down the machine you were working on.

The same goes for scalability. For example, if you need to run a billing process, you could "spawn" twice as many cloud machines. This makes scalability a piece of cake. And again, when you're done, just get rid of the machines you don't use anymore. This makes it easier for companies to run big processes in a smaller amount of time.

 
Trying it out
The only time I’ve used WASD is on the machine that Jamie Thomson (Blog | @jamiet) made available to the SQL Family (read about it here). This was later taken over by Red-Gate, but I’m not sure this is still available.

But if you want to try it out, just create your own AdventureWorks on Azure. You can download the scripts here, and start your Azure trial here.

Data paging using offset

With every new release of SQL Server, we get to use new features that make our lives as developers and administrators so much easier. A few days ago, I came across an old piece of code (SQL 2005 if I remember correctly) that I used to page data for a CMS I built when I was still a web developer.

The company I worked for needed a new website, and wanted an HTML-editor to edit content on the website. This content was stored in SQL Server, and was retrieved by the website. With a few business rules we decided which content was visible, and which content was hidden from the website.

One of the features of the website was a news feed. But because there were so many news articles, we needed to show the top 10 articles on the first page and let the user click through to the next page of articles. In other words, we needed to page this data: every page should show 10 articles, and the user should be able to switch pages to see the rest of the news articles.

 
Creating the sample data
In order to show you the problem and solution, we need to create a sample table:

CREATE TABLE dbo.Paging
  (ID INT IDENTITY(1,1),
   Title VARCHAR(50),
   Content VARCHAR(50))

 
The test data we need to insert looks the same for every record:

INSERT INTO dbo.Paging
  (Title, Content)
VALUES
  ('This is an article', 'This is the content')
GO 50

 
This script will insert 50 records into the Paging table.

 
The old way
In older versions of SQL Server you needed to build your own solution to solve this problem. Let's assume you clicked the news feed button on the website, and we want to switch to page 2 of the results. The solution I built back then looked something like this:

DECLARE @RowsToShow INT = 10,
        @RowsToSkip INT = 10


SELECT TOP(@RowsToShow)
  ID ,
  Title ,
  Content
FROM dbo.Paging
WHERE ID NOT IN
  (SELECT TOP(@RowsToSkip) ID FROM dbo.Paging)
ORDER BY ID ASC

 
In the query above, you can see we skip the first 10 rows, and retrieve the 10 rows after that. That means you need to keep track on the website of which records were already retrieved, and which records you want to see next. The easiest way to do this is by excluding the IDs you've already shown, and retrieving the next set of rows.

This means you get execution plans like this:

 
The new way
From SQL Server 2012 onwards, we can use a new feature that is called OFFSET. That feature allows us to “window” our dataset, and retrieve a small subset of data without using a TOP and subquery, like in the example above. The new query would look like this:

SELECT
  ID ,
  Title ,
  Content
FROM dbo.Paging
ORDER BY ID ASC
OFFSET 10 ROWS
FETCH NEXT 10 ROWS ONLY

 
In the query above, you can see an offset of 10 rows, and a fetch of 10 rows. This means that it skips the first 10 records, and retrieves the next 10 records after that. But how can you get this to work with dynamic resultsets and pages? This is one way to do it:

DECLARE @RowsPerPage INT = 10,
        @PageNumber INT = 2

SELECT
  ID ,
  Title ,
  Content
FROM dbo.Paging
ORDER BY ID ASC
OFFSET ((@PageNumber - 1) * @RowsPerPage) ROWS
FETCH NEXT @RowsPerPage ROWS ONLY

 
The offset is calculated by taking the @PageNumber parameter minus one, so that page 1 starts at the beginning of the resultset (if we didn't subtract one, page 1 would already skip the first 10 records). Then we multiply that number by the @RowsPerPage parameter, to calculate how many rows we need to skip.
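For example, with @PageNumber set to 3 and @RowsPerPage set to 10, the OFFSET becomes (3 - 1) * 10 = 20 rows, so the query skips rows 1 through 20 and returns rows 21 through 30.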

After that, we use the @RowsPerPage in the FETCH NEXT clause to retrieve the number of rows we want (in this case the next 10). This results in a completely different execution plan:

 
As you can see, this has a lot less impact on SQL Server. This becomes really visible if you compare both in SSMS:

 
I/O Costs
Comparing query costs is nice when you quickly compare 2 different approaches for the same solution, but in the end it all comes down to I/O costs. So which query is the fastest solution, and how are we going to test that?

First of all, we need to measure the I/O calls. We do that by using a DBCC command called DROPCLEANBUFFERS. This allows us “to test queries with a cold buffer cache without shutting down and restarting the server”. SO DON’T USE THIS IN PRODUCTION!!!

So the complete test script looks like this:

SET NOCOUNT ON
SET STATISTICS IO ON


DBCC DROPCLEANBUFFERS

--==================================================
DECLARE @RowsToShow INT = 10,
        @RowsToSkip INT = 10


SELECT TOP(@RowsToShow)
  ID ,
  Title ,
  Content
FROM dbo.Paging
WHERE ID NOT IN
  (SELECT TOP(@RowsToSkip) ID FROM dbo.Paging)
ORDER BY ID ASC
--==================================================

DBCC DROPCLEANBUFFERS

--==================================================
DECLARE @RowsPerPage INT = 10,
        @PageNumber INT = 2

SELECT
  ID ,
  Title ,
  Content
FROM dbo.Paging
ORDER BY ID ASC
OFFSET ((@PageNumber - 1) * @RowsPerPage) ROWS
FETCH NEXT @RowsPerPage ROWS ONLY
--==================================================

So we clean the SQL Server buffers, run the first query, clean the buffers again, and run the second query. Now the effect of the new statement is really obvious, if you look at I/O costs:

 
So the old version of the query (with the sub-select) scans the table twice, and reads 51 pages from the cache. The new approach (with the OFFSET) scans the table only once, and reads only 1 page from the cache.

 
Conclusion
The fewer I/O calls SQL Server needs to retrieve the result from disk or cache, the faster your query will run. In this case, we've tuned the query from 51 pages read down to 1 page read. And we've only tested this on a table with 50 records; on a bigger table the difference becomes even more significant. So here's a lot of performance improvement from only one new piece of functionality. And there is a lot more out there.

T-SQL Tuesday #46 – Rube Goldberg Machine

T-SQL Tuesday is a recurring blog party that was started by Adam Machanic (Blog | @AdamMachanic). Each month a blogger hosts the party, and everyone who wants to can write a blog post about a specific subject.

This month the subject is “Rube Goldberg Machine”. If you want to read the opening post, please click the image below to go to the party-starter: Rick Krueger (Blog | @DataOgre).



 
This month's topic is about being creative with SQL Server, and how you sometimes bend the rules a little bit to fix a problem. It might not be the best solution, but it's the only solution, or the quickest solution, at the time. Everyone has a story like that, and so do I…

 
Just like a normal project: no budget
A few years back, I worked for a company as a web developer and team-DBA. One of our projects was to build a new warning system, so administrators and developers knew if something went wrong in the production environment. But the checks (like heartbeats, disk space checks, etc.) needed to be stored in SQL Server. But instead of creating budget for a new SQL Server, they told us to solve it with SQL Express.

The version we used was SQL Server 2005 Express, and all went well in the development phase. But the moment we tested the new system in the test environment, the data grew exponentially within a few hours. And after a few days, we stopped the test, because a single database wasn't big enough anymore. The checks generated close to 4 GB of data each month, and that's the database size limit for SQL Server 2005 Express edition.

 
Solving it, MacGyver-style
So we needed to come up with a solution within a few days, and there was no possibility of moving to a full SQL Server license. So we needed to find a solution that worked with SQL Express. We finally solved it with a 3rd party tool that could run scheduled jobs on the machine (the SQL Server Agent isn't available in Express), with a single step that started a stored procedure. This stored procedure contained a lot of dynamic SQL (yes, I'm sorry, but we had no other option at the time) that moved data into an archive database.

The job ran every night, a few minutes past midnight. The stored procedure first checked if there was data in the database that was older than today. If so, it then checked if there was an archive database for that month. If there wasn't, it created a database with a dynamic name: "Archive" + "_" + %ApplicationName% + "_" + %Month% + "-" + %Year%.
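The core of that dynamic SQL looked, heavily simplified, something like the sketch below (the names are made up, and the real procedure contained a lot more error handling):

DECLARE @ApplicationName VARCHAR(50),
        @DatabaseName SYSNAME,
        @Sql NVARCHAR(MAX)

SET @ApplicationName = 'Monitoring'

/* Build the archive database name: Archive_<application>_<month>-<year> */
SET @DatabaseName = 'Archive_' + @ApplicationName + '_'
                  + DATENAME(MONTH, GETDATE()) + '-'
                  + CAST(YEAR(GETDATE()) AS VARCHAR(4))

/* Create the archive database if it doesn't exist yet */
IF DB_ID(@DatabaseName) IS NULL
BEGIN
  SET @Sql = N'CREATE DATABASE ' + QUOTENAME(@DatabaseName)
  EXEC sp_executesql @Sql
END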

So now that we made sure there was an archive database, we moved the data from the day(s) before today to the archive database. The flow would look something like this:

 
Don’t try this at home kids!
So you can imagine that, looking back at this solution, I'm not proud of the path we were forced to choose. The technical solution, however, is something that I look back on with pride. Back then I had just started working with SQL Server, and didn't have a lot of experience with building these types of solutions. But the solution we built was pretty stable. The only downside was that if the job didn't run at night for some reason, we needed to move the data by hand during the day. And because the database ran in "production" (there was a possibility of running transactions), we needed to move the data bit by bit, without locking the SQL Server. This meant that if the job didn't run, I would spend most of the day moving data, waiting for that operation to finish, moving the next chunk of data, and so on.

So in the end, the man-hours we put into it probably didn't add up to the cost of a SQL Server license, although a license would have made for a cleaner solution. But in the end, the manager was right (unfortunately). We never found the time after that to perfect the checks, and the system administrators went with another 3rd party application, because it was easier to maintain. So a full SQL Server license would have been wasted money after all, unless we could have used it for another project. But looking back, it was a great experience to build and design such a solution.

T-SQL Tuesday #45 – Follow the Yellow Brick Road

T-SQL Tuesday is a recurring blog party that was started by Adam Machanic (Blog | @AdamMachanic). Each month a blogger hosts the party, and everyone who wants to can write a blog post about a specific subject.

This month the subject is “Follow the Yellow Brick Road”. If you want to read the opening post, please click the image below to go to the party-starter: Mickey Stuewe (Blog | @SQLMickey).



 
When I read this month's subject, I didn't know what to write about, until I read one of the other blogs. So I'm sorry Mickey, but I'm using this month's subject as a guideline, and extending it a little bit. I'm glad I know Mickey, and I'm guessing she doesn't mind me getting a little creative.

The writer of the post I read, talked about index fragmentation and data growth, and then I remembered a situation I encountered a while back.

When you come into a company, you'd really like (and maybe even expect) to walk into a well-organized team. A team that has procedures that work for them, and team members that have agreed to solve problems in a specific way. But sometimes, it's not like that…

 
Here you go!
On my first day as DBA at the company, I walked in and, after a few conversations, booted up my computer. Once it was booted, they gave me the name of a server, and wished me good luck. No documentation, no explanation of the databases, and no mention of who worked on the server. The only thing they said when they "handed over the key" was: "Here you go! From now on it's your responsibility to keep the environment running and healthy!".

So, there I was… No documentation, no health check, no one to tell me what the status of the server was…

 
Taking that first step is always the hardest
So the first thing I did was check out which databases were on the server. There were a lot: 70 in total. Then I checked the user permissions: most users were members of the db_owner role. Great! Asking which users really needed all those permissions was useless. Everyone needed full access to everything.

So then I took the step to tell them that I was in charge of the server from that moment on, and that if they did something without my permission, I’d remove their access to the environment. Looking back, that was easier said than done.

But the first step that was really hard to accomplish was reporting on the server's health. No one thought we needed that; if something went wrong, we would notice it in an instant anyway. But because I knew we really did need it, I just cleared my schedule, and created a few reports.

 
Being right is always a fun thing
A few weeks later, they finally saw I was right. One of the databases filled up a disk, and now we had issues. The only way we could see what had gone wrong was by looking at the data growth over the previous weeks in my reports. So now I could show them how wrong they were. It's nice to be right once in a while!

 
Audit
This is where this month's topic comes into play. A clear indication that you lack an audit trail is when there's an issue with one of your processes, databases, or even something as small as a specific dataset, and you don't know where the issue came from. Following your data flows from beginning to end is always a good idea. And if you work at a larger company, you're (at least in the Netherlands) required to document your data flows for a financial audit every year.

But it's not only your datasets that need audit trails. Your databases are in dire need of an audit trail as well. Because what happens if your database grows 50% in size over the course of a week, and you don't track those kinds of numbers? Your disk fills up, your database stops working (if you haven't got the space to extend your datafiles), and eventually your world becomes a dark and evil place if you can't fix it.

 
Conclusion
Even if you don't have a full front-to-back audit trail of your data and processes, at least try to monitor and audit the key points in your data flow. That helps you debug excessive data growth, and helps you when (for example) a user creates a new database without asking your permission. Even small audits help you out when you're in a world of pain and trouble.
