Incremental updates with Change Data Capture

When you design a database or ETL process, for example to load your production data into a reporting environment, you start with performance in mind. At the beginning of the project, your scripts and ETL run blazing fast. But after a few months in production, the entire process grinds to a halt. How do you fix that, without a complete redesign of your applications and database? One of the many solutions is an easy one: load your data incrementally into the destination tables.

 
Change Data Capture
Incremental data loading can be a hard nut to crack. It isn't always an option, but it's a good place to start. One way to load your data incrementally is to use a column in your table as a reference. If your table has a "Modified Date" column that is updated every time a record changes, you can use that: every night, when the process runs, you only add the records that were modified after the last successful run. But what if you don't have that option? Change Data Capture (CDC) is an easy way out.
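As a minimal sketch of such a watermark-based load (the table and column names here are hypothetical, not from a real system), it could look like this:

```sql
/* Hypothetical watermark-based incremental load: only pick up rows
   modified since the last successful run. */
DECLARE @LastRun DATETIME =
  (SELECT MAX(LastSuccessfulRun) FROM etl.ProcessLog)  -- hypothetical log table

INSERT INTO reporting.Orders (OrderID, Amount, ModifiedDate)
SELECT OrderID, Amount, ModifiedDate
FROM production.Orders
WHERE ModifiedDate > @LastRun
```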

 
CDC is a way to record inserts, updates and deletes on a specific table, without having to write triggers yourself. CDC reads the transaction log and captures all changes made to the table. These changes are stored in an associated change table that CDC creates.

Below I'm going to show you how to set up your first table with CDC. If you would like to know more about CDC, this TechNet article is a good place to start.

 
Create an example
To show you the basics of CDC, let's start by creating a table called TestCDC in a database called Sandbox:

USE Sandbox
GO

CREATE TABLE dbo.TestCDC
  (ID int IDENTITY(1,1) NOT NULL PRIMARY KEY,
	 Descr varchar(50) NULL)
GO

 
Once you've created the table, enable CDC at the database level by executing the system stored procedure created for that purpose:

EXEC sys.sp_cdc_enable_db

 
There is also a system stored procedure to enable CDC at the table level. You need to enable CDC manually, and separately, for every table you want to track:

EXEC sys.sp_cdc_enable_table
  @source_schema = 'dbo',
  @source_name = 'TestCDC',
  @role_name = NULL

 
If the SQL Server Agent is running on your machine or server, you’ll see this confirmation (I’ll explain later why SQL Server Agent is needed):

 
If the Agent isn’t running, you’ll see this warning:

 
After you run the enable table statement, you'll see that SQL Server has created the system objects needed to track changes in the table:

 
Because CDC uses two SQL Server Agent jobs to capture changes and clean up the change tables, the Agent must be running to start the data capture. If the jobs aren't running, SQL Server won't capture any changes made:
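You can inspect and start these jobs with the documented CDC system procedures (a quick sketch, not from the original post):

```sql
/* Show the configuration of the capture and cleanup jobs */
EXEC sys.sp_cdc_help_jobs

/* Start the capture job manually, if it isn't running */
EXEC sys.sp_cdc_start_job @job_type = N'capture'
```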

 
Start data changes
In order to see what happens when you change data, let’s insert some records:

INSERT INTO dbo.TestCDC
  (Descr)
VALUES
  ('This is a description')

INSERT INTO dbo.TestCDC
  (Descr)
VALUES
  ('This is a description too...')

 
And let’s update one of those 2 inserted records:

UPDATE dbo.TestCDC
SET Descr = 'UPD - ' + Descr
WHERE ID = 2

 
Now, let’s check the content of both the original table, and the change table:

/* Original table */
SELECT * FROM dbo.TestCDC

/* Change table */
SELECT * FROM cdc.dbo_TestCDC_CT

 
If you run both queries, you’ll see the resultset below:

 
The records in the CDC change table allow you to update the data in your reporting environment. You could query the change table yourself, retrieving all changes since your last update. You can also use the system functions that return those changes for you, for example cdc.fn_cdc_get_net_changes_&lt;capture_instance&gt;. You can read more about that system function here.
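For the example table, querying all captured changes could look like the sketch below. Note that cdc.fn_cdc_get_all_changes_&lt;capture_instance&gt; is always created, while the net-changes variant only exists if the table was enabled with @supports_net_changes = 1:

```sql
DECLARE @from_lsn BINARY(10) = sys.fn_cdc_get_min_lsn('dbo_TestCDC'),
        @to_lsn   BINARY(10) = sys.fn_cdc_get_max_lsn()

/* All changes captured for dbo.TestCDC between the two LSNs */
SELECT *
FROM cdc.fn_cdc_get_all_changes_dbo_TestCDC(@from_lsn, @to_lsn, N'all')
```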

 
Cleaning up after an update
Now that you've updated your reporting environment, it's wise to clean up your CDC data. You could delete the records yourself with a DELETE statement, but another option is the system procedure "sys.sp_cdc_cleanup_change_table". You can clean up your data with the following SQL statement:

DECLARE @Last_LSN VARBINARY(10) =
  (SELECT MAX(cdc.dbo_TestCDC_CT.[__$start_lsn]) FROM cdc.dbo_TestCDC_CT)

EXEC sys.sp_cdc_cleanup_change_table
  @capture_instance = 'dbo_TestCDC',
  @low_water_mark = @Last_LSN,
  @threshold = 5000

 
The query retrieves the last LSN (Log Sequence Number), and removes everything that happened before it.

 
Cleanup of CDC
If you want to completely remove CDC (because we’re done testing), you can disable it on the table level by running the query below:

EXEC sys.sp_cdc_disable_table
  @source_schema = 'dbo',
  @source_name = 'TestCDC',
  @capture_instance = 'dbo_TestCDC'

 
The statement cleans up all the objects that were created to enable CDC on that table, but it only disables CDC for that specific table. The fastest way to disable CDC on all tables in the database is to disable it at the database level, by running the query below:

EXEC sys.sp_cdc_disable_db

 
Conclusion
Loading data always takes time, and many factors play a part: your database size, your frequency of changes, your ETL process, etc. You can reduce the time it takes to move data by rewriting your process to use incremental loads. CDC is one of the many ways to achieve this. It works out of the box, and doesn't require you to build any process yourself. But maybe your environment needs a custom process to operate the way you want it to. Not every feature in SQL Server is a so-called silver bullet, but sometimes it comes darn close to one…

Recursive CTEs: Opening the door to a new dimension

A few weeks ago, someone asked me what a Common Table Expression (CTE) is, and how it works. But writing about just a CTE would result in a short blog post. So why not make it a little more interesting, and also write about recursive CTEs, right?

Since SQL Server 2005 we have had the opportunity to use something called a Common Table Expression, or CTE for short. This T-SQL feature allows us to use the result set of a query as a source for another statement, just like a physical table. This sounds weird, but let me show you.

 
Creating the resources needed
To explain the possibilities of a CTE, let’s create a simple example with employee data. We’re going to create a table that contains the employee’s ID, name, job title and the employee ID of the manager:

CREATE TABLE dbo.Employees
    (EmployeeID INT,
	 EmployeeName VARCHAR(25),
     JobTitle VARCHAR(50),
     ManagerID INT)
 
 
INSERT INTO dbo.Employees
    (EmployeeID, JobTitle, EmployeeName, ManagerID)
VALUES
    (1, 'CEO', 'Smith', NULL),
    (2, 'Department Manager', 'Winters', 1),
    (3, 'Team Manager', 'Thompson', 2),
    (4, 'Employee', 'Jones', 3);

 
If you select all records from this table, it will look like this:

 
So as you can see, every employee has a manager, except the CEO. He’s at the top of the food chain and no one has a higher position than him. And looking at the “ordinary” employee, you’ll see that he has a manager above him, but no employees beneath him. He’s at the bottom of the food chain so to speak.

But what about the CTE? Let's write an easy example that adds the employees' last names and job titles together, without doing that in your final select statement:

WITH EmployeeList AS
(
    SELECT
		EmployeeID,
        EmployeeName
        + ' - ' +
        JobTitle AS EmployeeNameAndJobTitle
    FROM dbo.Employees
)


SELECT
	EmployeeID,
	EmployeeNameAndJobTitle
FROM EmployeeList;

 
If we analyse the example above, the select query that adds the employee's last name and job title together becomes a new result set called "EmployeeList", which can be used just like a normal table in the query below the CTE. The result looks like this:

 
But remember: the CTE exists only in the scope of your query. This means you can run only one select statement against the CTE. If you try to run a second select statement, you'll find that the CTE no longer exists.
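You can see this scope in action with a small sketch: only the statement that directly follows the CTE can use it.

```sql
WITH EmployeeList AS
(
    SELECT EmployeeID FROM dbo.Employees
)
SELECT EmployeeID FROM EmployeeList;   -- works

SELECT EmployeeID FROM EmployeeList;   -- fails: "Invalid object name 'EmployeeList'"
```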

Another thing you can do with a CTE, is directly add the column names to the CTE like this:

WITH EmployeeList (EmployeeID, EmployeeNameAndJobTitle) AS
(
    SELECT
		EmployeeID,
        EmployeeName
        + ' - ' +
        JobTitle
    FROM dbo.Employees
)


SELECT
	EmployeeID,
	EmployeeNameAndJobTitle
FROM EmployeeList;

 
This might be easier to read for you and your colleagues than naming the column in the select with an "AS EmployeeNameAndJobTitle". The result is exactly the same as in the example above.

This is just an easy example, but imagine using this to join an aggregated temporary dataset from your CTE onto another table, to build your final result set.

 
Recursive
But now comes the best part: recursive CTEs. To stick with this example: what if you want to combine the job title of an employee with that of the manager above him? This kind of result set can be created with a recursive CTE. In the example below, the manager's job title is added to the employee's job title:

WITH HierarchyCTE (EmployeeID, ManagerID, JobTitle, Hierarchy)
AS
	(SELECT
		EmployeeID,
		ManagerID,
		JobTitle,
		CONVERT(VARCHAR(35), '')
	 FROM dbo.Employees
	 WHERE ManagerID IS NULL

	 UNION ALL

	 SELECT
		E.EmployeeID,
		E.ManagerID,
		E.JobTitle,
		CONVERT(VARCHAR(35), H.JobTitle + ' \ ' + E.JobTitle)
	 FROM dbo.Employees E
	 INNER JOIN HierarchyCTE H
		ON H.EmployeeID = E.ManagerID
)


SELECT
	EmployeeID,
	ManagerID,
	JobTitle,
	Hierarchy
FROM HierarchyCTE;

 
This is what the result looks like:

 
What happens in the recursive CTE is that the first query serves as the anchor, and the second (recursive) query is executed repeatedly, each pass adding the child records of the rows produced by the previous pass. When there are no child records left for a parent, the next parent is fetched, and so on, until no new rows are returned.
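One thing to keep in mind (not covered above): SQL Server stops a recursive CTE after 100 recursion levels by default. For deeper hierarchies you can raise that limit with the MAXRECURSION hint, as in this sketch:

```sql
/* Number generator: recurses roughly 1000 levels, so the default
   limit of 100 must be raised with MAXRECURSION */
WITH Numbers AS
(
    SELECT 1 AS n
    UNION ALL
    SELECT n + 1 FROM Numbers WHERE n < 1000
)
SELECT MAX(n) FROM Numbers
OPTION (MAXRECURSION 1000);
```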

 
So what can you do with this?
The examples above were just easy ones, but you can imagine the opportunities this functionality offers. For example, if you work for an electronics store, you could build a long string of all packages with sub-deliveries (when every phone is sent with a screen protector and a case, for example). Or create a complete overview of contents for recipes (for example, which ingredients go into a pie). The possibilities are endless…

Get column headers without retrieving data

A few days ago Pinal Dave (Blog | @pinaldave) retweeted an old blog post about retrieving columns without data. Reading this blog post, I started thinking of other ways to achieve this.

One of the downsides of using SET FMTONLY ON is that it's a session setting: every query you run in that session returns no results. This can lead to very confusing situations. You could find yourself debugging a query that returns no results, only to eventually discover that SET FMTONLY ON is the cause. So this may not be the preferred way to get the column headers for a specific query. Also, this option doesn't return estimated or actual execution plans!

An example of such a query could be:

SET FMTONLY ON

SELECT *
FROM sys.tables

SET FMTONLY OFF

Another way of getting this information, without changing your connection options, is a simple WHERE clause.

SELECT *
FROM sys.tables
WHERE 1 = 0

This query returns the same results as the query with SET FMTONLY ON. And looking at performance, there's no real difference between the two. I ran statistics on both solutions, and the I/O generated is negligible.

Conclusion
My personal favorite is the WHERE clause. This is because it doesn't involve session modifiers, and (if you want to) it still returns an execution plan. SET FMTONLY ON doesn't return an estimated or actual execution plan.

On the other hand, SET FMTONLY is on the list of features deprecated in future SQL Server versions. So for all new functionality, I'd advise you not to use it.
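Since SQL Server 2012 there's also a dedicated system procedure for this purpose, sys.sp_describe_first_result_set, which returns the metadata of a query's first result set without executing the query:

```sql
EXEC sys.sp_describe_first_result_set
  @tsql = N'SELECT * FROM sys.tables'
```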

But maybe another option suits you better. As always: it depends, on your own preferences and the specific situation. If you have any other options, please let me know. Post a comment, send me an e-mail, tweet me, whatever you want :).

T-SQL Tuesday #35 – Soylent Green

T-SQL Tuesday is a recurring blog party that was started by Adam Machanic (Blog | @AdamMachanic). Each month a blog will host the party, and everyone that wants to can write a blog post about a specific subject.

This month the subject is “Soylent Green”. If you want to read the opening post, please click the image below to go to the party-starter: Nick Haslam (Blog | @nhaslam).



The question of this month was to write down our most horrifying discovery from our work with SQL Server. If you work with SQL long enough, you will encounter some strange situations. One way or the other…

My first experience with SQL Server was at my first official IT job. Back then I worked for an insurance company, and there I was offered the opportunity to become a "Conversion Specialist". This meant that I visited customers, advised them about their data and our software, and converted and imported their data into our main system. When I started the job, I had never written a single T-SQL statement. So the learning curve was pretty steep, but after some sleepless nights of studying, I got the hang of it. And during this job, I encountered my first (of many) horrifying experiences…

In this company, the main application took its data from one (!!!) SQL 2000 database. The system contained all the data the company had, and it was a "rebuilt" DOS application in Delphi (at that time the company worked with Delphi and .NET). In order to store all of their data in one data model (yeah, I know!), they created a "flexible and dynamic" model… In one table…

The data was stored in one table (it wants to remain anonymous, so from now on we'll call it "Foo"), and Foo contained more than 200 columns (if I recall correctly). Every time we inserted a record into Foo, SQL Server calmly mentioned that the limit of 8000 bytes (the maximum per record) was exceeded. How they fixed that? I still haven't got a clue…

Every "object" stored in Foo had properties, or a collection of properties (which obviously ended up as one record per item in the collection). But as I mentioned, they were stored "dynamically". So if you wanted to retrieve an object "Tree", you needed columns 15, 18, 20 and 52. When retrieving an object "Bird", you needed columns 15, 18, 25 and 2550 for the same properties.

But I must honestly admit: I left the company after six years with tears in my eyes. They offered me a lot of opportunities, and the colleagues were awesome!

Another example I encountered on a production environment (at a different company) was an issue with currency calculations. The product data and the currency rates were loaded from two different data sources. To combine them in one record (and calculate the turnover), they used a CASE statement in the script that ran in the Data Warehouse. But when I took over the Data Warehouse, they forgot to mention one thing…

If a product was sold for 100 Mexican pesos (with current exchange rates this is about € 6.00 or $ 7.80), and no exchange rate from pesos to dollars was present, the script ended up in the ELSE clause. This clause multiplied the amount by 1, "so as not to mess up the data". And without anyone noticing, 100 Mexican pesos turned into $ 100! It actually took a while for people to notice this (including me!).

And I'm probably not the only one with these experiences, so I'm glad Nick asked us to share them with the rest of the #SQLFamily! And if you want to read more of these horrifying experiences, let me know. I might write another blog post about this, because this feels like therapy! And it's another chance for me to make Rob Volk (a great guy, and a great example to me!) proud, by using the "Evil" tag again! ;)

T-SQL Toolbelt – Search for objects in databases – V 2.1.0

A few weeks ago, I received a message from an old colleague and friend, Eric (Blog | @saidin). He wanted to know if I had a query in my (and I quote) "magic bag of SQL tricks" to search through objects in SQL Server. The company he works for (he is a software developer and independent consultant) wanted to change all stored procedures that contained functionality to calculate VAT (Value Added Tax).

I remembered that a few years ago, I needed that same functionality, and I wrote a few scripts to search for specific dependencies in views and stored procedures. Next to a query that gets information from sys.tables and sys.columns, I used these queries to get view and stored procedure content:

SELECT *
FROM sys.syscomments
WHERE text LIKE '%<SearchTerm>%'


SELECT *
FROM INFORMATION_SCHEMA.VIEWS
WHERE VIEW_DEFINITION LIKE '%<SearchTerm>%'

The first query uses information from sys.syscomments, which, according to MSDN:

“Contains entries for each view, rule, default, trigger, CHECK constraint, DEFAULT constraint, and stored procedure within the database. The text column contains the original SQL definition statements.”

The second query uses INFORMATION_SCHEMA, which contains SQL Server metadata (see this MSDN article):

An information schema view is one of several methods SQL Server provides for obtaining metadata.

The VIEWS view (a view on all views?) returns a row for each view that can be accessed by the current user in the current database. This means the view only returns rows for objects you have permissions on.

Then I decided to write a script that does this in one query, and more… While working on this script, I thought about adding more functionality. Why wouldn't you want to search for primary or foreign key columns, triggers, functions, etc.? But adding more information to the result set often means the overview is lost along the way. Because of that, I created a switch system: by setting a few bits you can turn on what you need to see, and turn off what you don't. This way the result stays clean, and you're not bothered with unnecessary information.

One of the issues I ran into was how to search for a specific string. Because I wanted the user to enter the search term only once, I needed to use a variable. But if you use a variable and add a wildcard (%) as the first and last character, the query returns all objects, instead of returning objects based on the specific search term.

So because of this, I used dynamic SQL to search through the list of objects. In dynamic SQL it's possible to work with wildcards in a LIKE statement. The only other thing I needed to do was change one more table from a table variable to a physical temp table, because it's used in the dynamic SQL. Apparently dynamic SQL can't use a table variable (DECLARE @Object TABLE) as a data source.
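As a minimal sketch of that dynamic SQL approach (my own example, not taken from the actual toolbelt script), searching all object definitions for a term could look like this:

```sql
/* Sketch: search object definitions for a user-supplied term via
   dynamic SQL, so the wildcards end up inside the LIKE literal */
DECLARE @SearchTerm NVARCHAR(100) = N'VAT',
        @sql NVARCHAR(MAX)

SET @sql = N'SELECT OBJECT_NAME(object_id) AS ObjectName
             FROM sys.sql_modules
             WHERE definition LIKE ''%' + @SearchTerm + '%'''

EXEC sp_executesql @sql
```

Note that sp_executesql also supports parameters, which would avoid concatenating user input into the statement.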

So this is what I managed to build over the past few weeks. The only problem is that fixing the issues I found resulted in adding more and more new functionality to the script. With that in mind, I want to create a toolbelt of useful SQL scripts for myself. And of course, I want to share it with the community, so they can use it if they like.

In the upcoming weeks, I hope to build as much functionality into this script as I can. There are still a few wishes for the future, and a few features I want to build in just because they're cool! For every new version, I will write a blog post with release notes, so you're aware of the changes in the script.

For further questions and wishes, please contact me via Twitter or this blog. I'd love to hear your ideas and wishes, so I can implement them in the script!

You can download the script by clicking on the image below.

Downloads

Version 2.1.0:


Error: Permissions denied mssqlsystemresource – configurations

Last week I encountered a strange issue on some of our production SQL Servers: we weren't able to connect to them. If we tried to connect with a normal Windows NT (Active Directory) or SQL account, we got the following error (I recreated the error locally):

Message: The SELECT permission was denied on the object ‘configurations’, database ‘mssqlsystemresource’, schema ‘sys’. (Microsoft SQL Server, Error: 229)

For further information about the mssqlsystemresource database, please read my previous post about this database.

But in my case, there was nothing wrong with this Resource database. After all, no other database was unavailable or corrupt. So I needed a little bit more research. After a while I found out that the issue is coming from a security setting.

We allowed an external company of Data Warehouse consultants (KVL) to access our production SQL Server. They needed to create a snapshot, and the person responsible for making this possible created two stored procedures (SPs) for this task: one SP to create the snapshot, and one to drop it. These SPs are stored in the master database.

But because he wanted them to only have access to the SPs, he created a user for this (again, recreated locally):

And he added a server mapping for the user:

After that he tested it, and it worked like a charm! He e-mailed me the location and names of the SPs, so I could send them to our consultants. Then, without any of us knowing, he added one more thing to the User Mappings:

When the consultants tried to create the snapshot, they couldn't get it to work. After some research I found out that the User Mapping on the master database was set to db_denydatareader. As you all know, deny permissions always overrule grant permissions. In this case, that worked against us.

So if you ever encounter this issue, please take a look at your security settings. If all of your other databases are still accessible, the error is caused by a lack of permissions. You'll encounter this issue most often when a user (NT or SQL user) is a member of two separate groups with different permissions.

Row_Number: Unique ID in select statement

Last week a colleague, Sander (Blog | @SanderKooij), asked me: "What's the easiest way to add a unique identifier (or row ID) to a result set?". That's an interesting question, and there are several ways to do this. For example, you could insert the result set into a temp table (physical or table variable) with an extra identity column. But what if you want to do this in the select statement itself? Here's a solution.

If you use the scripts I’ve added to this post (check Code Samples), you’ve created a table with country abbreviation codes. This table contains the data as shown below:

If you select the data, and want to add a record ID to your result set, you can use the following script:

SELECT
	DefaultAbbreviation,
	CountryProper,
	CountryNumber,
	ROW_NUMBER() OVER(ORDER BY DefaultAbbreviation ASC) AS RowNumber
FROM CountryAbbreviationCodes

The statement above uses the ROW_NUMBER built-in function to generate a row number for the result set. The ORDER BY clause in the function tells SQL Server which column(s) to use to generate the row numbers. In this case, the result set is ordered by the abbreviation column.

But what if you want to select the data with a specific row number? If you try to use the ROW_NUMBER function in the where clause, you get the following error:

“Windowed functions can only appear in the SELECT or ORDER BY clauses.”

So you need another solution. From SQL Server 2005 onwards we can use a Common Table Expression (CTE). With a CTE you can use a select statement as a table. So if you want to return the rows that are numbered 50 through 60, you can use the following query:

WITH OrderedCountries AS
(
	SELECT
		DefaultAbbreviation,
		CountryProper,
		CountryNumber,
		ROW_NUMBER() OVER(ORDER BY DefaultAbbreviation ASC) AS RowNumber
	FROM CountryAbbreviationCodes
)

SELECT
	DefaultAbbreviation,
	CountryProper,
	CountryNumber,
	RowNumber
FROM OrderedCountries
WHERE 1 = 1
AND RowNumber BETWEEN 50 AND 60

The result of this statement is the 11 rows we requested (BETWEEN is inclusive on both ends). So with a CTE we can use the ROW_NUMBER function to return specific rows from a table.

With the ROW_NUMBER function, you can also delete duplicate records from your tables. If you're interested, post a comment, mail or tweet me, and I might write a blog post about it.
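As a quick sketch of that idea (assuming rows count as duplicates when all three columns match; the first occurrence is kept), you could delete through a CTE:

```sql
/* Sketch: delete duplicates, keeping the first row per group */
WITH Numbered AS
(
	SELECT
		ROW_NUMBER() OVER(
			PARTITION BY DefaultAbbreviation, CountryProper, CountryNumber
			ORDER BY DefaultAbbreviation) AS RowNumber
	FROM CountryAbbreviationCodes
)

DELETE FROM Numbered
WHERE RowNumber > 1
```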


Code samples:
CreateSources.sql
SelectStatement.sql
CTESelectStatement.sql

SQL Joins – The Differences

In SQL Server you can find several joins. In this post I will try to explain the differences between them, and how you can use them to your advantage.

Joins
There are several different types of joins. Some of them look different, but are actually the same. For example, a Left Join is the same as a Left Outer Join. The same goes for Right/Right Outer Joins. And if you ever encounter a statement with just Join in it, SQL Server interprets it as an Inner Join.

(Inner) Join
An Inner Join takes the results from both tables in the join, and combines data that matches. All data that doesn’t match the ON clause, isn’t shown in the result set.

Left (Outer) Join
A Left Join takes the 2 tables used in the join, and takes all records from the left table (1st table in your join), and matches and adds data from the right (2nd table in your join). All data from the 2nd table that doesn’t match the ON clause is filtered out.

Right (Outer) Join
A Right Join takes the 2 tables used in the join, and takes all records from the right (2nd table in your join), and matches and adds data from the left (1st table in your join). All data from the 1st table that doesn’t match the ON clause is filtered out.

Full (Outer) Join
A Full Join takes the records from both tables, and matches all the data. The data that doesn’t match is also shown, except with NULL values in the columns that are missing on the other side.

Left (Outer) Join – Right Key Is NULL
With this Join (can be found in code samples, download below), you will get all the results that are not included in the Left Join.

Right (Outer) Join – Left Key Is NULL
With this statement (can be found in code samples, download below), you will get all the results that are not included in the Right Join.

Full (Outer) Join – Both Keys Are NULL
With this join you get all the data that isn’t included in the Full Join result set.

Cross Join
The Cross Join is a "special" join: it returns every possible combination of rows from the two tables. This sounds weird, but the image below and the code samples make it much clearer.
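To make the differences concrete, here is a small sketch on two hypothetical temp tables (my own names, not from the original code samples download):

```sql
CREATE TABLE #Left  (ID INT, Val VARCHAR(10))
CREATE TABLE #Right (ID INT, Val VARCHAR(10))

INSERT INTO #Left  VALUES (1, 'A'), (2, 'B')
INSERT INTO #Right VALUES (2, 'X'), (3, 'Y')

/* Inner Join: only ID 2 matches both tables */
SELECT L.ID, L.Val, R.Val
FROM #Left L
INNER JOIN #Right R ON R.ID = L.ID

/* Left Join: IDs 1 and 2; R.Val is NULL for ID 1 */
SELECT L.ID, L.Val, R.Val
FROM #Left L
LEFT JOIN #Right R ON R.ID = L.ID

/* Left Join - right key is NULL: only ID 1 */
SELECT L.ID, L.Val
FROM #Left L
LEFT JOIN #Right R ON R.ID = L.ID
WHERE R.ID IS NULL

/* Cross Join: all 2 x 2 = 4 combinations */
SELECT L.ID, R.ID
FROM #Left L
CROSS JOIN #Right R
```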


Code samples:
CreateSources.sql
SelectStatements.sql

Count records in all database tables

Every once in a while you want to know the record count of all objects in your database. If you have 10 objects or fewer, you can choose to type out the statements yourself, but once it's more than that, it gets quite annoying.

Because I needed this for a production issue (to make sure all objects contained records after an import), I wrote a cursor that queries all database objects and counts their contents:

USE Sandbox

--==========================================================================================
DECLARE @TableName VARCHAR(50) = '',
        @SQLStatement VARCHAR(MAX) = ''
--==========================================================================================
DECLARE TableCursor CURSOR FOR
SELECT name
FROM sys.tables
WHERE name NOT IN ('sysdiagrams')
AND name NOT LIKE 'Temp%'
AND name NOT LIKE '%ReturnGrid%'
ORDER BY name ASC
--==========================================================================================
OPEN TableCursor
FETCH NEXT FROM TableCursor INTO @TableName
--==========================================================================================
WHILE @@FETCH_STATUS = 0
BEGIN
       SET @SQLStatement = @SQLStatement + 'SELECT ''' + @TableName + ''' AS TableName,
                                            COUNT(*) AS RecordAmount FROM ' + @TableName
       SET @SQLStatement = @SQLStatement + ' UNION ALL '

       /* Print Result */
       PRINT @TableName

       /* Select SQL Statement */
       --SELECT @TableName, @SQLStatement

       FETCH NEXT FROM TableCursor INTO @TableName
END
--==========================================================================================
CLOSE TableCursor
DEALLOCATE TableCursor
--==========================================================================================
SET @SQLStatement = LEFT(@SQLStatement, (LEN(@SQLStatement) - 10))
--==========================================================================================
/* Print  */
PRINT '--=========================================================================================='
PRINT 'Executing Statements'

EXEC(@SQLStatement)
--========================================================================================== 

 
The script uses a cursor to query the object names from sys.tables, and with that data it creates a dynamic SQL query that counts the contents of all objects. The sysdiagrams (MS-shipped) table, and the Temp and ReturnGrid tables, are excluded from the cursor.

For big databases I built in a PRINT statement, so you can keep track of the progress while the query runs. When the dynamic SQL statement is executed, the text "Executing Statements" is printed at the bottom of the query output. After that you will see the results.
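As a lighter-weight alternative (a sketch of a different technique than the cursor above: it reads catalog metadata instead of scanning every table, so the counts are approximate for tables with in-flight transactions):

```sql
/* Row counts per table from the partition metadata */
SELECT t.name AS TableName,
       SUM(p.rows) AS RecordAmount
FROM sys.tables t
JOIN sys.partitions p
  ON p.object_id = t.object_id
 AND p.index_id IN (0, 1)   -- heap or clustered index only
GROUP BY t.name
ORDER BY t.name
```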
