Data paging using OFFSET

With every new release of SQL Server, we get new features that make our lives as developers and administrators so much easier. A few days ago, I came across an old piece of code (SQL 2005, if I remember correctly) that I used to page data for a CMS I built when I was still a web developer.

The company I worked for needed a new website, and wanted an HTML editor to edit the content on it. This content was stored in SQL Server and retrieved by the website. With a few business rules we decided which content was visible, and which content was hidden.

One of the features of the website was a news feed. Because there were so many news articles, we needed to page the data: show the top 10 articles on the first page, and let the user click through to the next page for the rest of the articles.

 
Creating the sample data
In order to show you the problem and solution, we need to create a sample table:

CREATE TABLE dbo.Paging
  (ID INT IDENTITY(1,1),
   Title VARCHAR(50),
   Content VARCHAR(50))

 
The test data we need to insert looks the same for every record:

INSERT INTO dbo.Paging
  (Title, Content)
VALUES
  ('This is an article', 'This is the content')
GO 50

 
This script will insert 50 records into the Paging table (the GO 50 repeats the batch 50 times).
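If you want to double-check the insert (a quick, optional sanity check), you can count the rows:

SELECT COUNT(*) AS NumberOfRows   -- should return 50
FROM dbo.Paging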

 
The old way
In older versions of SQL Server you needed to build your own solution to this problem. Let’s assume you clicked the news feed button on the website, and you want to switch to page 2 of the results. The solution I built back then looked something like this:

DECLARE @RowsToShow INT = 10,
        @RowsToSkip INT = 10


SELECT TOP(@RowsToShow)
  ID ,
  Title ,
  Content
FROM dbo.Paging
WHERE ID NOT IN
  (SELECT TOP(@RowsToSkip) ID FROM dbo.Paging)
ORDER BY ID ASC

 
In the query above, you can see we skip the first 10 rows, and retrieve the 10 rows after that. That means the website needs to keep track of which records were already retrieved, and which records you want to see next. The easiest way to do this is to exclude the IDs you’ve already shown, and retrieve the next set of rows.

This means you get execution plans like this:

 
The new way
From SQL Server 2012 onwards, we can use a new feature called OFFSET. It allows us to “window” our dataset, and retrieve a small subset of the data without the TOP and subquery from the example above. The new query looks like this:

SELECT
  ID ,
  Title ,
  Content
FROM dbo.Paging
ORDER BY ID ASC
OFFSET 10 ROWS
FETCH NEXT 10 ROWS ONLY

 
In the query above, you can see an offset of 10 rows, and a fetch of 10 rows. This means that it skips the first 10 records, and retrieves the next 10 records after that. But how can you get this to work with dynamic result sets and page numbers? This is one way to do it:

DECLARE @RowsPerPage INT = 10,
        @PageNumber INT = 2

SELECT
  ID ,
  Title ,
  Content
FROM dbo.Paging
ORDER BY ID ASC
OFFSET ((@PageNumber - 1) * @RowsPerPage) ROWS
FETCH NEXT @RowsPerPage ROWS ONLY

 
The offset is calculated by taking the @PageNumber parameter minus one, so we land on the page we want to see. If we didn’t do that, the offset would always skip the first 10 records. We then multiply that number by the @RowsPerPage parameter, to calculate how many rows we need to skip.
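To make the arithmetic concrete, this is what the calculation does for the first few pages, and how you would run the query above for page 3 (only the parameter values differ):

-- With @RowsPerPage = 10:
--   Page 1: (1 - 1) * 10 = 0  rows skipped --> rows 1-10
--   Page 2: (2 - 1) * 10 = 10 rows skipped --> rows 11-20
--   Page 3: (3 - 1) * 10 = 20 rows skipped --> rows 21-30
DECLARE @RowsPerPage INT = 10,
        @PageNumber INT = 3

SELECT
  ID ,
  Title ,
  Content
FROM dbo.Paging
ORDER BY ID ASC
OFFSET ((@PageNumber - 1) * @RowsPerPage) ROWS
FETCH NEXT @RowsPerPage ROWS ONLY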

After that, we use the @RowsPerPage in the FETCH NEXT clause to retrieve the number of rows we want (in this case the next 10). This results in a completely different execution plan:

 
As you can see, this has a lot less impact on SQL Server. This becomes really visible if you compare both in SSMS:

 
I/O Costs
Comparing query costs is nice for a quick comparison of two different approaches to the same problem, but in the end it all comes down to I/O. So which query is the fastest solution, and how are we going to test that?

First of all, we need to measure the I/O calls. We do that by using a DBCC command called DROPCLEANBUFFERS. This allows us “to test queries with a cold buffer cache without shutting down and restarting the server”. SO DON’T USE THIS IN PRODUCTION!!!
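A small side note (this comes from the documentation of DBCC DROPCLEANBUFFERS, and wasn’t part of my original test): to make sure all buffers really are clean before you drop them, you can issue a CHECKPOINT first:

CHECKPOINT              -- write dirty pages of the current database to disk
DBCC DROPCLEANBUFFERS   -- then remove the clean buffers from the buffer pool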

So the complete test script looks like this:

SET NOCOUNT ON
SET STATISTICS IO ON


DBCC DROPCLEANBUFFERS

--==================================================
DECLARE @RowsToShow INT = 10,
        @RowsToSkip INT = 10


SELECT TOP(@RowsToShow)
  ID ,
  Title ,
  Content
FROM dbo.Paging
WHERE ID NOT IN
  (SELECT TOP(@RowsToSkip) ID FROM dbo.Paging)
ORDER BY ID ASC
--==================================================

DBCC DROPCLEANBUFFERS

--==================================================
DECLARE @RowsPerPage INT = 10,
        @PageNumber INT = 2

SELECT
  ID ,
  Title ,
  Content
FROM dbo.Paging
ORDER BY ID ASC
OFFSET ((@PageNumber - 1) * @RowsPerPage) ROWS
FETCH NEXT @RowsPerPage ROWS ONLY
--==================================================

So we clean the SQL Server buffers, run the first query, clean the buffers again, and run the second query. Now the effect of the new statement becomes really obvious if you look at the I/O costs:

 
So the old version of the query (with the sub-select) scans the table twice, and reads 51 pages from the cache. The new approach (with the OFFSET) scans the table only once, and reads only 1 page from the cache.

 
Conclusion
The fewer I/O calls SQL Server needs to retrieve the result from disk or cache, the faster your query will run. In this case, we’ve tuned the query from 51 pages read down to 1 page read. And even though we’ve only tested this on a table with 50 records, the difference in reads is already significant. So that’s a lot of performance improvement from just one new piece of functionality. And there is a lot more out there.

T-SQL Tuesday #43 – Hello, Operator?

T-SQL Tuesday is a recurring blog party that was started by Adam Machanic (Blog | @AdamMachanic). Each month a blog hosts the party, and everyone who wants to can write a post about a specific subject.

This month the subject is “Hello, Operator?”. If you want to read the opening post, please click the image below to go to the party-starter: Rob Farley (Blog | @rob_farley).



 
Execution Plan
Everybody that ever got serious with SQL Server knows about execution plans. And like Rob said in his opening post, if you don’t, start looking into them! But when you start working with execution plans, it gets confusing really fast. All those weird “building blocks” that tell you what SQL Server did with your query?! It almost looks like magic…!

But when you dive into them, they become easier to read and suddenly somewhat understandable. But hey, even after all those years of looking at execution plans, some things still amaze me.

Last week I was asked to look at a query that was used in an SSIS package. This package was one of the slowest in the nightly ETL processes. So I started off by looking at the execution plan:

 
Start pinpointing the first issue
Then I started digging, and I noticed this:

LEFT JOIN DB.dbo.T1 m
	ON m.Col1 = ISNULL(LOWER(t.Col1) COLLATE Latin1_General_CS_AS, '(N/A)')
LEFT JOIN DB.dbo.T2 s
	ON s.Col2 = ISNULL(LOWER(t.Col2) COLLATE Latin1_General_CS_AS, '(N/A)')
LEFT JOIN DB.dbo.T3 k
	ON k.Col3 = ISNULL(LOWER(t.Col3) COLLATE Latin1_General_CS_AS, '(N/A)')

 
But wait a minute… The collations we use aren’t case sensitive, so why use the LOWER() function at all? And the collations of the two databases used in the query are practically the same (“SQL_Latin1_General_CP1_CI_AS” and “Latin1_General_CI_AS”), while the column collations are identical (“SQL_Latin1_General_CP1_CI_AS”). SQL Server doesn’t throw an error if I join both sets, and the results are the same whether I use COLLATE or not. So we learned another thing: it’s the column collation that is used in the ON clause of the JOIN. Both functions aren’t necessary because the collations are equal, and thus they’re wasting our precious CPU cycles…
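For reference, this is roughly what the simplified joins look like without the LOWER() and COLLATE calls (a sketch, keeping the ISNULL() and the same anonymized table and column names from the snippet above):

LEFT JOIN DB.dbo.T1 m
	ON m.Col1 = ISNULL(t.Col1, '(N/A)')
LEFT JOIN DB.dbo.T2 s
	ON s.Col2 = ISNULL(t.Col2, '(N/A)')
LEFT JOIN DB.dbo.T3 k
	ON k.Col3 = ISNULL(t.Col3, '(N/A)')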

Now here comes the moment you’ve all been waiting for. How does the execution plan differ? This is what the new execution plan looks like:

So what are we missing? You guessed right! We’re missing this bad boy:

So what does the “Compute Scalar” actually do? According to MSDN:

 

The Compute Scalar operator evaluates an expression to produce a computed scalar value

So that means it calculates a new value for every input row. This calculation can be as simple as a conversion of a value, or a concatenation of multiple values. Most of the time these “Compute Scalar” operators aren’t a major cost in your execution plan, but they can become an issue that uses a lot of CPU.
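As a small illustration (this uses the dbo.Paging sample table from the paging example earlier on this page, not the actual ETL query), a computed expression like the concatenation below shows up as a Compute Scalar in the execution plan:

SELECT
  ID ,
  Title + ' - ' + Content AS Headline  -- this concatenation is evaluated by a Compute Scalar operator
FROM dbo.Paging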

But what does this “Compute Scalar” do in this query? If I remove the ISNULL() from the query, it’s still there. It disappears if I remove the COLLATE. So in this case, the “Compute Scalar” is “calculating” a new value for every record. This “calculation” actually is a conversion from one collation to the other.

 
But what’s the difference?
If we compare both versions of the query, the modified version has a lot less impact on the server:

And this is just from removing the LOWER() and COLLATE functions. With a little more effort, you can rewrite a bigger part of the query, maybe add an index, etc. So as you can see, small changes can have a huge impact!
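For example, a supporting index on one of the join columns could look like the statement below (purely hypothetical, because T1 and Col1 are anonymized names from the snippet above; always check the actual columns and workload first):

-- Hypothetical supporting index on the (anonymized) join column
CREATE NONCLUSTERED INDEX IX_T1_Col1
	ON dbo.T1 (Col1)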

 
Conclusion
A “Compute Scalar” isn’t that hard to understand, but you’ll find it very often in your execution plans. So even though it isn’t the worst performance blocker in the world, it’s an interesting one to get familiar with.

If you don’t use SQL Sentry Plan Explorer yet, download your copy now!

Huge operator costs in execution plan

If you work with SQL Server, you’ll need to look at execution plans sooner or later. Now and in the past, I’ve had the privilege of introducing many of my (former) colleagues to these “works of magic”. But what happens if you can’t trust the plans you’re looking at…?

Say what…?
Last week I was asked to look at a slow-running query. The first thing I did was look at the execution plan. It hit me pretty fast that this might be a “less optimized” query:

As you can see, it was a query with a lot of CTEs and sub-selects, that was built by multiple developers and analysts. No one took the time to review or rewrite parts of the query; they all built their additions on top of the old version. This isn’t uncommon in most companies, because time is precious and costs money. And people often find it difficult to ask for, or make time for, quality control.

But looking a little closer, I noticed that the operator costs in the execution plan were a little too high, if you ask me:

This couldn’t be correct! So I asked the help of a lifesaver called SQL Sentry Plan Explorer. If you aren’t using it already, start doing that now! And no, they don’t pay me to say this (but if they want to, I have nothing against that…). The main reason I use Plan Explorer is that it shows you a little more information, and the layout is better than the default execution plans from SQL Server. But what does Plan Explorer show us if we load the same plan?

It seems that the Plan Explorer shows the right numbers. But how is this possible? After some online searching, I came to the conclusion that I’m not the only one having this issue:

Huge operator cost in estimated execution plan
Query plan iterator cost percentage way off
SSMS execution plan sometimes exceeds 100
Katmai also 2005 graphical plan operator costs exceed 100

But unfortunately, all of these issues are marked for a “future release”, and only one is from last year; the other Connect items are much older. So maybe they will fix it in the next release that was just announced.

But keep in mind: even though the numbers look weird, this doesn’t affect performance.

Calculating Running Totals

When you work with data and database systems, eventually you need to calculate running totals on (for example) product sales or financial data. There are several methods to calculate these amounts. In this post I’ll try to show the pros and cons of the different solutions.

Let’s start by creating the resources for the examples. The most basic example I could think of is one with only the necessary information: Date and Turnover.

CREATE TABLE Dough
	(Date DATE,
	 Turnover FLOAT)

And then insert some data:

INSERT INTO Dough
	(Date, Turnover)
VALUES
	('2011-01-01', 1000),
	('2011-02-01', 1250),
	('2011-03-01', 1500),
	('2011-04-01', 1750),
	('2011-05-01', 2000),
	('2011-06-01', 2250),
	('2011-07-01', 2250),
	('2011-08-01', 2000),
	('2011-09-01', 1750),
	('2011-10-01', 1500),
	('2011-11-01', 1250),
	('2011-12-01', 1000)

INSERT INTO Dough
	(Date, Turnover)
VALUES
	('2012-01-01', 100),
	('2012-02-01', 125),
	('2012-03-01', 150),
	('2012-04-01', 175),
	('2012-05-01', 200),
	('2012-06-01', 225),
	('2012-07-01', 225),
	('2012-08-01', 200),
	('2012-09-01', 175),
	('2012-10-01', 150),
	('2012-11-01', 125),
	('2012-12-01', 100)

With this resource, we can start on the examples.

Different solutions


When looking at this question, you’ll notice that there is more than one solution that returns the correct result. The following queries all return the same result, but each solution is written for a specific version of SQL Server.

SQL 2000
If you’re using SQL Server 2000 (and I certainly hope you don’t have to anymore ;)), you can use the query with the INNER JOIN. This can be used on all SQL Server versions:

SELECT
	A.Date,
	A.Turnover,
	SUM(B.Turnover)		AS RunningTotal
FROM Dough A
INNER JOIN Dough B
	ON YEAR(B.Date) = YEAR(A.Date)
	AND B.Date <= A.Date
GROUP BY A.Date, A.Turnover
ORDER BY A.Date ASC

SQL 2005
In SQL Server 2005 you can write the same query with a CROSS JOIN and a WHERE clause:

SELECT
	A.Date,
	A.Turnover,
	SUM(B.Turnover)		AS RunningTotal
FROM Dough A
CROSS JOIN Dough B
WHERE YEAR(B.Date) = YEAR(A.Date)
AND B.Date <= A.Date
GROUP BY A.Date, A.Turnover
ORDER BY A.Date ASC

The INNER JOIN and CROSS JOIN examples generate the same execution plan.

SQL 2012
With the release of SQL Server 2012 they handed us (SQL developers) a whole new “bag of tricks”. One of these “tricks” is the window function.

The first time I saw the window function was at a Techdays NL 2012 session, hosted by Bob Beauchemin (Blog | @bobbeauch). The session (T-SQL improvements in SQL Server 2012) is worth watching, even if you’re using SQL Server 2012 already!

With a window function you aggregate over a “window” of rows that you specify, without collapsing the result set into groups.

SELECT
	Date,
	TurnOver,
	SUM(TurnOver) OVER (PARTITION BY YEAR(Date)
						ORDER BY Date ASC
						ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)		AS RunningTotals
FROM Dough

Performance


Seeing all these different solutions for the same question, I wondered (and you probably will too) about the performance of these queries. One very quick conclusion: they all return the same records ;).

When using SET STATISTICS IO, you can see the amount of disk activity generated by your statement. If you run this for the queries above, you will get the following results:

INNER JOIN:

Table ‘Dough’. Scan count 2, logical reads 25, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

CROSS JOIN:

Table ‘Dough’. Scan count 2, logical reads 25, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

OVER:

Table ‘Worktable’. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table ‘Dough’. Scan count 1, logical reads 1, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

In the results of the OVER query, you see a table called “Worktable”. This is an “extra object” that SQL Server generates because of the OVER clause; the window aggregate uses it to spool the rows in the window frame.
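If you want to reproduce these numbers yourself, a minimal sketch of the test looks like this (it assumes the Dough table from above; run the INNER JOIN and CROSS JOIN queries the same way):

SET NOCOUNT ON
SET STATISTICS IO ON

-- The window function version; repeat this for the INNER JOIN and CROSS JOIN queries above
SELECT
	Date,
	Turnover,
	SUM(Turnover) OVER (PARTITION BY YEAR(Date)
						ORDER BY Date ASC
						ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS RunningTotal
FROM Dough

SET STATISTICS IO OFF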

Conclusion


As shown above, there are several different ways to get to the same result. In this example I didn’t show you the cursor solution. That’s because it’s a bad practice, a bad performer, and a little bit to preserve my good name ;). If you do want to see it, please leave me a comment, and I’ll add it to this post.

But with every solution you’ll see as much discussion about reasons to use it as about reasons NOT to use it. And in this case, you might be bound to a specific SQL Server version, so you can’t use every approach.

But if you ask me for my opinion, I’ll go for the last option. Not only because I’ve got the privilege to work with SQL Server 2012 in my daily work, but also because it’s the best performer and you’ll end up with the most readable code.

I’m guessing you have a totally different opinion, so please leave a comment with your ideas and/or approaches to this challenge! Comments and questions are also greatly appreciated!
