Running maintenance on Azure SQL databases

This article was recently published on dev.getroadmap.com:

 

To keep your data healthy and in top shape, we all know you need to run database maintenance on your databases. This prevents things like performance problems or unrecoverable data, and that is pretty much the core business of DBAs. But when we look at how this can be done on a regular basis, an on-premise instance is a bit easier to maintain than an Azure SQL database. That’s why I want to share our experience with you, hoping that it can save you some time when you start implementing this.

 
Platform setup
In order to get a better understanding of why certain choices were made, I want to share a very basic version of the setup of our environment.

Our platform consists of 2 major parts: a private cloud part (that we see as “on-premise”), and an Azure part. Those 2 parts combined make up our platform, and this platform hosts all of our services, applications, databases, etc. Our development approach is that everything should be designed cloud-first, but only if it’s the right tool for the job, and with the exclusion of some data.

The databases we use in Azure are all what Microsoft calls “Azure SQL databases”. This means that no virtual machines of any kind are running on the Azure part of our platform; this is also known as DBaaS (Database as a Service).

When I draw this platform, and only focus on the data part, it will look something like this:

 
One of the advantages of this setup is that we can leverage the power of both parts of the platform. As you’ll see later on in this blog, there are some things that we (need to) run from the on-premise instances and some things fully on Azure.

 
Big shoutout to Ola
Before I go into detail, I want to give full kudos to Ola Hallengren (Website | @olahallengren). He has spent a lot of his time building a SQL Server Maintenance Solution that is completely free for everyone to use. And he did such an excellent job that a lot of companies (including some huge ones) use his solution to run maintenance tasks on their databases.

None of the scripts below are written by me; I’ve only made small changes to make things clearer when the solution is deployed to an environment. The original scripts can be downloaded via the download page on Ola’s website.

 
Backups & Integrity check
Taking backups of your database and making sure there is no corruption in the data files is an essential part of the maintenance solution written by Ola. But because Azure SQL databases have a built-in maintenance solution (for example backups: full backups weekly, differentials hourly, and transaction log backups every 5 minutes, source), we don’t need to worry about that ourselves.

 
Index maintenance & Update Statistics
Indexes and statistics are the core of your performance-based maintenance processes. They make sure your queries run fast, and should provide you with stable and predictable performance. This is especially needed on an Azure database, since you can’t monitor it like you would an on-premise database.

Because Azure SQL databases are run on shared hardware that you can’t monitor yourself, Microsoft provides us with a number of different performance counters that we can use to check the status/health of our databases. The most important counters are CPU usage, Data IO, Log IO and DTU usage (a combination of the previously mentioned counters). The DTU counter is the most abstract (to me at least), because it’s explained by Microsoft as:

 

The Database Transaction Unit (DTU) is the unit of measure in SQL Database that represents the relative power of databases based on a real-world measure: the database transaction. We took a set of operations that are typical for an online transaction processing (OLTP) request, and then measured how many transactions could be completed per second under fully loaded conditions.

For example, a Premium P11 database with 1750 DTUs provides 350x more DTU compute power than a Basic database with 5 DTUs.

 
And for me, who was used to monitoring physical hardware, that is a bit of a different approach when digging into performance-related issues. It’s not that index and statistics maintenance isn’t important when you work on an on-premise database, but it’s a slightly bigger challenge to monitor the direct effects of an index rebuild or statistics update.
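If you want to look at these counters from within the database itself, you can query the sys.dm_db_resource_stats DMV (as far as I know it keeps roughly the last hour of usage, in 15-second intervals). A quick sketch:

SELECT TOP 10
	end_time,
	avg_cpu_percent,
	avg_data_io_percent,
	avg_log_write_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC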

Because every Azure SQL database is a contained database, you need to deploy the stored procedures from Ola’s solution to every single database. So to keep it clear for everyone which tables and stored procedures belong to the maintenance solution, I’ve changed Ola’s scripts slightly to create all objects in a specific schema named “dba”. So first of all, let’s create the schema:

CREATE SCHEMA dba
GO

 
And then create the table that is used to log all of the maintenance commands and their outcome:

CREATE TABLE dba.CommandLog
	(ID INT IDENTITY(1,1) NOT NULL CONSTRAINT PK_CommandLog PRIMARY KEY CLUSTERED,
	 DatabaseName sysname NULL,
	 SchemaName sysname NULL,
	 ObjectName sysname NULL,
	 ObjectType CHAR(2) NULL,
	 IndexName sysname NULL,
	 IndexType TINYINT NULL,
	 StatisticsName sysname NULL,
	 PartitionNumber INT NULL,
	 ExtendedInfo XML NULL,
	 Command NVARCHAR(MAX) NOT NULL,
	 CommandType NVARCHAR(60) NOT NULL,
	 StartTime DATETIME NOT NULL,
	 EndTime DATETIME NULL,
	 ErrorNumber INT NULL,
	 ErrorMessage NVARCHAR(MAX) NULL)
GO

 
Now that these are created, you can create the stored procedures that execute the actual index and statistics maintenance. They are too long to post here as a code snippet, but you can download the CommandExecute script here, and the IndexOptimize script here.

But because we want to schedule these procedures later on, I decided to create an additional stored procedure in every database, which is deployed alongside the maintenance objects:

CREATE PROCEDURE dba.ExecuteMaintenance
AS


EXECUTE dba.IndexOptimize
 @Databases = '<Insert database name>',
 @FragmentationLow = 'INDEX_REORGANIZE',
 @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
 @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
 @FragmentationLevel1 = 5,
 @FragmentationLevel2 = 30,
 @UpdateStatistics = 'ALL',
 @LogToTable = 'Y',
 @Execute = 'Y'

GO

 
This allows us to run maintenance with specific options on the different databases.
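Running the maintenance on a database is then just a single call (this is also the call we’ll schedule later on):

EXEC dba.ExecuteMaintenance
GO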

Scheduling
After this was all set up, I needed to come up with a way to run these procedures on a schedule. As you might know, Azure SQL databases don’t have a SQL Server Agent, and that’s where our on-premise platform comes in handy. Just for this, I created a new virtual machine in our private cloud and installed SQL Server on that machine to utilize the SQL Server Agent. This server (the operations server) runs all of the scheduled operational jobs (including maintenance and some monitoring jobs) against our Azure environment.

But to run a process from this operations machine on one of our Azure databases, I first needed to create a linked server:

EXEC sp_addlinkedserver 
 @server=N'ServerName__DatabaseName', 
 @srvproduct=N'Azure SQL Db',
 @provider=N'SQLNCLI', 
 @datasrc=N'<SERVERNAME>,1433',
 @catalog=N'DatabaseName';
GO

EXEC sp_addlinkedsrvlogin 
 @rmtsrvname = 'ServerName__DatabaseName', 
 @useself = 'FALSE', 
 @locallogin=NULL,
 @rmtuser = '<USERNAME>',
 @rmtpassword = '<PASSWORD>'
 GO

 
And all that’s left now is to create a SQL Server Agent job that executes the “ExecuteMaintenance” stored procedure on the Azure database:

EXEC [<LinkedServerName>].<DatabaseName>.dba.ExecuteMaintenance
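One thing to keep in mind: executing a remote stored procedure through a four-part name only works if the “RPC Out” option is enabled on the linked server. If you run into an error telling you the server is not configured for RPC, something like this should fix it (using the linked server name from the example above):

EXEC sp_serveroption
 @server = N'ServerName__DatabaseName',
 @optname = 'rpc out',
 @optvalue = 'true';
GO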

 
The reason I chose this approach is that, at the time I was implementing the maintenance solution, I didn’t want to lose any time figuring out how Azure Scheduler or Azure Automation works.

But if you want a good resource on that, please check out Pieter Vanhove’s blog (Blog | @Pieter_Vanhove), because he wrote a great post about exactly that.

 
Conclusion
When I look at how this solution is set up, I’m the first to admit that it isn’t a perfect or ideal solution. But in the end, it gives the rest of the team a clear procedure when they need to start or restart the maintenance process: just log in to the operations server, start SQL Server Management Studio (SSMS), open the SQL Server Agent jobs, find the job associated with the database they want to run maintenance on, and that’s it. But for future scalability and maintainability, we might need to implement another solution.

Where to find backup and restore history

Interesting questions don’t always need to be hard to answer. Earlier this week I was asked “How can I see what backup was restored on environment X?”. The answer is as interesting as the question…

 
Backup
To see which backups were taken, you can take a look at the history of your maintenance job (depending on which type you use in your environment). But there is another way: just ask SQL Server.

First, let’s create a new database named Sandbox (before you run the scripts in this blog post, change or create the directories that are used!):

USE master
GO

CREATE DATABASE Sandbox
	ON PRIMARY (NAME = N'Sandbox', FILENAME = N'C:\Database\Sandbox.mdf', SIZE = 4096KB, FILEGROWTH = 1024KB)
	LOG ON (NAME = N'Sandbox_log', FILENAME = N'C:\Database\Sandbox_log.ldf', SIZE = 1024KB, FILEGROWTH = 1024KB)
GO

 
Before we start to back up this database, I want to make sure the information I see is only for this backup. In order to do so, I’m going to cycle the SQL Server error log, and clean all my backup history from msdb. You can do that by running these statements:

EXEC sys.sp_cycle_errorlog
EXEC msdb.dbo.sp_delete_backuphistory @oldest_date =  '2016-01-01'

 
By setting the “@oldest_date” to a date in the future, all backup history will be deleted.

Now that we’ve done that, let’s take a backup of our database:

BACKUP DATABASE [Sandbox]
 TO DISK = N'C:\Database\Backup\Sandbox_2010828_0832.bak' WITH STATS = 10;
GO

 
The backup will complete in an instant, because there is no data to back up:

 
The result of the backup can be found in 2 locations. The first one is the SQL Server error log:

EXEC xp_readerrorlog

 
If you search in the error log, you can find the exact time the backup was taken, the number of pages that were processed, the location of the backup file, etc.:



 
The SQL Server error logs are also available from the SQL Server Management Studio (SSMS). If you connect to your instance with the object explorer, and navigate to Management -> SQL Server Logs, you can find the same information.

The second location you can use to retrieve this information is msdb. The msdb contains the history of your backups:

 
You can retrieve this information by running a query on the backup-tables:

SELECT
	B.user_name,
	B.database_name,
	M.physical_device_name,
	B.backup_start_date,
	B.backup_finish_date,
	CASE B.type
		WHEN 'D' THEN 'Database'
		WHEN 'L' THEN 'Log'
	END AS BackupType
FROM msdb.dbo.backupset B
INNER JOIN msdb.dbo.backupmediafamily M ON M.media_set_id = B.media_set_id

 
Restore
The information for restored backups can also be found in the same 2 locations. Let’s restore the Sandbox database first:

USE master
GO

RESTORE DATABASE Sandbox
FROM DISK = N'C:\Database\Backup\Sandbox_2010828_0832.bak' WITH REPLACE, STATS = 10
GO

 
If we look at the SQL Server error log again, we can find the restore time, the backup that was used, etc.:



 
We can also query the msdb again. But this time, we need to use the restore-tables:

SELECT
	destination_database_name,
	user_name,
	restore_date,
	destination_phys_name
FROM msdb.dbo.restorehistory H
INNER JOIN msdb.dbo.restorefile F ON F.restore_history_id = H.restore_history_id

 
This shows you all recent restores on your instance:

 
Conclusion
Being able to retrieve the information you need to determine which database was backed up or restored can help you solve some weird issues. In my case, it explained why a certain record wasn’t restored on environment X: it was inserted after the backup was taken. Issues like that can be quite hard (or even impossible) to solve without information about backups and restores.

Source control: Using Visual Studio Online in SSMS

As database professionals, we’re all aware of the importance of backups. We make sure the backup process runs as expected, and (hopefully) we restore a backup every now and then to test if the restore process works. But what about source control for your scripts? Most of us don’t use source control in our daily job. But source control is as much a part of the backup process as the actual database backups are.

So, looking for an easy way out, I focused on Visual Studio Online (VS Online). This is a free online source control system that you can use once you’ve created an account. You can log in on the website with your Microsoft Live account, enter some information (like a username, etc.), and you’re ready to go!

But how do you go from writing a query in SQL Server Management Studio (SSMS), to checking in your .sql code files into VS Online? Let’s take a look.

 
Installing TFS tools
In order to use VS Online, you need to install some extra things on your machine. The first thing you need to download is the “Team Explorer for Microsoft Visual Studio 2013”. This installs the Team Explorer, which you need to get things from and check things into VS Online. This install requires a reboot, so please take that into account!

The second and last installation is the “Microsoft Visual Studio Team Foundation Server 2013 MSSCCI Provider”. This installs the provider that takes care of the communication with VS Online.

For this installation, it’s important that you download the right version. If you’re running the 32-bit version of SSMS, you need to download the 32-bit version of the provider. If you install the 64-bit version, it won’t work (and trust me on this, I’ve made that mistake before!).

 
VS Online
At this point, I’m assuming you have an account for VS Online. If you log in to the website, and you go to your account url ([AccountName].visualstudio.com), you see the “Create your first team project” page. On this page, you need to create a project before you can check in any files:

 
Configure source control in SSMS
The next step is to configure SSMS, to use the source control provider you just installed. After the installation, you get a new menu in SSMS for source control:

In this menu, click on “Open from Source Control”. In the window that opens, click “Servers…”:

Click “Add…” to add a TFS server or URL:

Now you need to enter your account URL in the textbox:

Once you’ve done that a login screen pops up. Log in with your Microsoft Live account (the same you used to create your VS Online profile), and you’re authenticated:

At this point, your source control is added, and you’ll see it in the overview screen:

If you close the windows that are shown, you end up in your source control project overview:

 
Creating a SQL Server Scripts project/solution
Now that we’ve installed the source control providers, it’s time to create our SQL Scripts project. In SSMS, click on File => New => Project (or Ctrl+Shift+N), and choose “SQL Server Scripts”:

Now that you have a project, it might be handy to show what’s in the solution (if you don’t have that on your screen already). You can open the “Solution Explorer” by clicking on View => Solution Explorer (or Ctrl+Alt+L).

 
Checking in your solution
If you want to check-in your solution or project, you can right-click it in the Solution Explorer, and choose “Add Solution to Source Control”:

 
Another way of doing it, is by clicking on File => Source Control => Add Solution to Source Control. In the next window you are asked for the source control server (click okay if your VS Online URL is selected), and you see the project explorer:

Click on “MySQLProject”, and click OK. Now the check-in screen pops up, where you can “tag” your check-in:

Congratulations! You just checked in your first code in VS Online!

 
Pitfalls and difficulties
In all honesty, there are some downsides to using this, but those are mainly issues in SSMS. For example, you can’t create folders in your SQL Script project. You have to work with the default “Queries” folder. So there’s no way to add a “Finance”, “Maintenance” or “DBA” folder to your project, for example.

A work-around for this is creating multiple projects in the solution. So you’ll end up with, for example, 3 projects in 1 solution. The nice thing is that you can check in or check out a single project in your solution, or work on the entire solution at once. This gives you the possibility to work on the solution on your own, or with a group of people.

Another pitfall is the process. If you’re not used to working with queries in source control, it’s easy to forget to check in your changes at the end of the day. But that’s just something you need to get used to, and deal with in your own way. If you’d like to check in multiple times a day, go ahead and do that. If you only want to check in your changes at the end of the day, that’s fine as well.

 
Conclusion
Using source control is great, especially when you have a lot of scripts to maintain. I’ve tried creating zip archives, version numbering my .sql files, different directories, etc. But there’s always a moment that you forget to save your script in an archive, or your computer crashes, and all your work is gone. Using source control prevents these issues (if you use it as intended, of course!). VS Online is a really good source control platform (it’s basically TFS online), and it’s free to use. One of the major advantages is that VS Online / TFS is fully compatible with SSMS and Visual Studio (both of which you use as a database professional).

Are you still skeptical? Maybe you should just try it out for a week, to see if it helps you in your daily job. And you know what, maybe you’ll like it. And even better: maybe it’ll someday save your life!

Speeding up your backups

Last week, I watched the SQLRally session by Pieter Vanhove (Blog | @Pieter_Vanhove), where he talked about Advanced Backup and Restore. He mentioned striped backups (taking a backup of a database, divided over a number of files). After watching the video, I started to wonder what made the biggest difference: the number of files, the number of disks, or compression.

 
Test setup
In order to test this, I restored a copy of the AdventureWorks2012 database, which you can download from msftdbprodsamples.codeplex.com. But because this database is only 200MB, taking a backup would only take a second. To make it a bit more interesting, I used a script Adam Machanic (Blog | @AdamMachanic) wrote. This script creates 2 new objects in the AdventureWorks database (dbo.bigProduct and dbo.bigTransactionHistory), which results in a database of 2.8GB. You can download the script here.

 
What matters most?
In order to test the differences in performance, I’ve tested multiple combinations:

– Multiple files on single disk
– Multiple files on 2 disks
– Both options above, with and without backup compression

After running all the different tests, I’ve added the results to an Excel sheet:

 
The results you see are in milliseconds. The analysis on these numbers is a bit difficult, so let’s put these numbers in a graph:

 

 
As you can see, the number of files (when looking at a minimum of 2 files) doesn’t make the biggest difference. The number of disks, and compression versus no compression, make the biggest difference. In my case, I tested it on 2 SSDs, but you may have more disks in your server. Or better yet, you are able to take backups on multiple LUNs on your storage, and the LUNs use multiple disks. The more spindles you can use, the faster your backup will become.
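For reference, this is roughly what a striped, compressed backup statement looks like (the file locations are just an example; in my test they were spread over the 2 SSDs):

BACKUP DATABASE AdventureWorks2012
 TO DISK = N'D:\Backup\AdventureWorks2012_1.bak',
    DISK = N'E:\Backup\AdventureWorks2012_2.bak'
 WITH COMPRESSION, STATS = 10;
GO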

But there’s also a downside to this. When you want to restore a backup, you need all the files to do that. So when you restore a normal backup, you only need 1 file. If you stripe your backup over 16 files, for example, you need all 16 files to perform a restore. So basically, the chance of having a corrupt backup file is 16x as high compared to a normal (single file) backup. For every advantage, there’s always a disadvantage…

If you want to read more about backups, don’t forget to check out these blog posts:

Julie Koesmarno: On sabbatical
Mickey Stuewe: Transaction Log Backups for the Accidental DBA
Chris Yates: Backups – They Are Needed, Who Knew?

Restoring or Moving a database with CDC enabled

When you have CDC enabled on your database, and you want to move it to another instance or restore it somewhere, you need to take some extra steps to make sure CDC is transferred correctly. When asking myself how this works, I decided to do some testing.

 
Create test resources
In order to test this, we need to create some test resources. I’ve created a new database, added a table and enabled CDC:

CREATE DATABASE [CDC_DB]
 ON PRIMARY
(NAME = N'CDC_DB', FILENAME = N'C:\Databases\#Backup\CDC_DB.mdf')
 LOG ON
(NAME = N'CDC_DB_log', FILENAME = N'C:\Databases\#Backup\CDC_DB_log.ldf')
GO

USE CDC_DB
GO

CREATE TABLE dbo.Customer
  (CustomerID INT CONSTRAINT PK_Customer PRIMARY KEY IDENTITY(1,1),
   FirstName VARCHAR(50),
   LastName VARCHAR(50))
GO

EXEC sys.sp_cdc_enable_db
GO

EXEC sys.sp_cdc_enable_table
  @source_schema = 'dbo',
  @source_name = 'Customer',
  @supports_net_changes = 1,
  @role_name = NULL,
  @index_name = 'PK_Customer'
GO


INSERT INTO dbo.Customer
  (FirstName, LastName)
VALUES
  ('John', 'Doe')
GO

Now that we have the resources, we can take a backup of the database:

BACKUP DATABASE CDC_DB TO DISK = 'C:\Databases\#Backup\CDC_DB.bak'

 
Restoring the backup
To show you that a default restore doesn’t keep CDC, let’s restore the backup next to the original database:

RESTORE DATABASE CDC_DB_RESTORE FROM DISK='C:\Databases\#Backup\CDC_DB.bak'
WITH
   MOVE 'CDC_DB' TO 'C:\Databases\#Backup\CDC_DB.mdf',
   MOVE 'CDC_DB_log' TO 'C:\Databases\#Backup\CDC_DB_log.ldf'

If we now try to select data from the change table of CDC, we get an error message:

 
But CDC was enabled in the old database, so why wasn’t it restored? If you look for a solution, Books Online / MSDN doesn’t make this very clear. But luckily other community members have blogged about this before (see this and this article). There is an option you can add to your restore:

USE master
GO

DROP DATABASE CDC_DB_RESTORE
GO

RESTORE DATABASE CDC_DB_RESTORE FROM DISK='C:\Temp\#BackupTest\CDC_DB.bak'
WITH
   MOVE 'CDC_DB' TO 'C:\Temp\#BackupTest\CDC_DB.mdf',
   MOVE 'CDC_DB_log' TO 'C:\Temp\#BackupTest\CDC_DB_log.ldf'
, KEEP_CDC
GO

 
If you add “KEEP_CDC” to your restore statement, it will also restore CDC. But that’s not your only option.
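Before we look at the other option: to verify that CDC survived the restore, you can query the change table directly. Assuming the default capture instance name (dbo_Customer), the change table is called cdc.dbo_Customer_CT:

USE CDC_DB_RESTORE
GO
SELECT * FROM cdc.dbo_Customer_CT
GO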

 
Detach and Attach

Another option you have is to detach your database, copy the files to another location, and attach the database again. Let’s detach the database first:

USE master
GO
ALTER DATABASE [CDC_DB_RESTORE] SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO
EXEC master.dbo.sp_detach_db @dbname = N'CDC_DB_RESTORE'
GO

 
Now that’s done, we can copy the files to another directory if needed. You could also just attach it on the same instance:

USE master
GO
CREATE DATABASE [CDC_DB_RESTORE] ON 
( FILENAME = N'C:\Temp\#BackupTest\CDC_DB.mdf' ),
( FILENAME = N'C:\Temp\#BackupTest\CDC_DB_log.ldf' )
 FOR ATTACH
GO

 
This also allows you to use CDC like nothing happened. The database is still CDC enabled, the change tables are there, and the captured data is still in the change table. But there’s one thing missing when you move your CDC database to another instance: the capture and cleanup jobs.

 
CDC Jobs
So how do we recover the jobs? Script them from one instance, and create them on the other instance? There’s a better way to do that. You can just run the statement below to add the jobs to your instance:

USE CDC_DB_RESTORE
GO
EXEC sys.sp_cdc_add_job 'capture'
GO
EXEC sys.sp_cdc_add_job 'cleanup'
GO
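If you want to check that both jobs are registered after this, sys.sp_cdc_help_jobs shows the configuration of the capture and cleanup jobs on the instance:

EXEC sys.sp_cdc_help_jobs
GO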

 
And if you move your database, you can also clean up the source instance, and run the statement below to remove the leftover jobs:

USE CDC_DB_RESTORE
GO
EXEC sys.sp_cdc_drop_job @job_type = N'capture'
GO
EXEC sys.sp_cdc_drop_job @job_type = N'cleanup'
GO

 
It is possible, but…
Moving or restoring a database with CDC gives you a few challenges, and even though there are some issues, it’s not impossible. But before you start, you need to think about the approach to take in order to move or restore a database without losing CDC. Depending on your environment and possibilities, you might want to use a specific option, or perhaps you can’t use the option you would like. There is always a way around it, but I recommend you test it first, before you run this on production databases!

Backup and relax?

Keeping a good backup strategy is a must for every DBA and database developer. But just creating a backup isn’t enough. Perhaps you don’t have enough storage to store the full backups of your database for over a week. Or taking a full backup of a database takes so long, it’s only possible on weekends. So what are your options?

 
RPO and RTO
Your whole backup strategy starts by determining the RPO (Recovery Point Objective) and RTO (Recovery Time Objective). Brent Ozar (Blog | @BrentO) wrote a great blog post about these terms, and explains what they mean.

Basically it means that you need to determine what maximum data loss is allowed, and from there you start creating a backup strategy. So how do you determine these? This is what RPO and RTO look like if you visualize them:

 
Storage
Another thing you want to consider is the storage available for your backups. Most of the time the backups will be stored on a NAS (Network-attached storage), and not on the local server, so storage isn’t a big issue in that case.

I’ve also seen companies that create the backup on the local server, and copy it to a network location after completion. In my opinion that’s just one more dependency you could prevent, but other than that it’s a valid option.

 
Backup types
SQL Server supports multiple backup options. They all have their pros and cons, and give you the ability to create a backup strategy that fits your needs. In this blog post, I’m assuming the database we work with uses the full recovery model.

 
Full backup
A full backup takes a backup of the entire database. With this backup file you’ll be able to recover the entire database, without the need of extra log files. Creating a full backup can take more time, depending on the size of the database. Let’s visualize this with an example:

 
Looking at the example, you’ll see that every night a full backup is created. But on Friday night at 8 PM the database crashes, and we need to recover from backup. The last full backup was taken 20 hours ago, so those 20 hours of data changes are lost.

 
Differential backup
If you have less time to spend on backing up your database every night, one of your options is to take a differential backup. A differential backup only backs up data that has changed since the last full backup. A differential backup can’t be created without taking a full backup first. If you try to create one without a full copy of the database, SQL Server will throw an error:

 
When you create a full backup, a backup chain is started. This means that SQL Server registers which LSN (Log Sequence Number) was added to the last backup. When you take the next backup, the backup will contain all transactions from the last LSN of the previous backup until the time of your new backup.

A backup chain can’t be started with a differential backup. Also, when you want to restore a differential backup, you need the full backup it’s based on. To put this into perspective, look at the visualization below:

 
At midnight on Monday we create a full backup of the database. Every other night we create a differential backup at midnight. On Friday at 8 PM the database crashes, and we need to restore a backup. All we need is the full backup from Monday morning and differential backup 4. Although it takes less time to create a differential backup, you see that this hasn’t helped you much: you still lost 20 hours of data.
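For reference, the statement itself is just a normal backup with the DIFFERENTIAL option added (the database name and path are placeholders):

BACKUP DATABASE [YourDatabase]
 TO DISK = N'C:\Database\Backup\YourDatabase_diff.bak'
 WITH DIFFERENTIAL, STATS = 10;
GO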

 
Transaction Log backup
The last major type of backup is the transaction log backup. This backup contains all transactions that were executed after the previous transaction log backup (or, for the very first log backup, after the full backup) was taken. This gives you the opportunity to perform a so-called “point-in-time recovery”.
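The log backup itself is a single statement, and a point-in-time restore uses the STOPAT option. A minimal sketch, with placeholder names, paths and times:

-- Take a transaction log backup (the database must use the full recovery model)
BACKUP LOG [YourDatabase]
 TO DISK = N'C:\Database\Backup\YourDatabase_log.trn'
 WITH STATS = 10;
GO

-- Point-in-time restore: restore the full (and differential) backup WITH NORECOVERY first,
-- then restore the log backups up to the moment just before the crash
RESTORE DATABASE [YourDatabase]
 FROM DISK = N'C:\Database\Backup\YourDatabase_full.bak'
 WITH NORECOVERY, REPLACE;
RESTORE LOG [YourDatabase]
 FROM DISK = N'C:\Database\Backup\YourDatabase_log.trn'
 WITH STOPAT = '2016-08-26 19:59:00', RECOVERY;
GO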

Just like a differential backup can’t be created without a full backup, a transaction log backup can’t be created without a full backup first. So a transaction log backup can’t be used to start a backup chain either. Let’s take the same example, and add a transaction log backup every 15 minutes (the blue lines represent the transaction log backups):

 
If the database crashes at the same time as in the previous examples, your data loss is slimmed down from 20 hours to a maximum of 15 minutes. In order to recover your database, you need the full backup created on Monday, the differential backup created on Friday, and all transaction log backups created after the differential backup at midnight. So if the database crash occurs a few seconds before the next transaction log backup, the maximum data loss is 15 minutes. Again, without a full backup you can’t create a transaction log backup:

 
Pitfalls
One pitfall to watch out for: SQL Server doesn’t check whether the backup files themselves are still there. If you create a database, back it up, and then throw away that backup file, you can still create differential or transaction log backups; SQL Server doesn’t require the last full (or differential) backup file to be present. So remember to always check if there is a valid backup available, either on the server or on your backup location.
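A quick way to check when the last full backup of each database was registered is to query msdb; keep in mind this only tells you a backup was taken, not that the file still exists or is valid:

SELECT
	database_name,
	MAX(backup_finish_date) AS LastFullBackup
FROM msdb.dbo.backupset
WHERE type = 'D'
GROUP BY database_name
ORDER BY LastFullBackup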

 
Backup compression
From SQL Server 2008 onward, you can use a feature called backup compression. Whether or not you compress your backup can also make a big difference in performance. A compressed backup is smaller, so it requires less I/O when created, and can increase backup speed significantly. On the other hand, compressing a backup increases CPU usage. So it’s a tradeoff you need to consider. But in some cases, this could solve the problem of having a shortage of storage.
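Enabling compression is just another backup option (placeholder names and paths again):

BACKUP DATABASE [YourDatabase]
 TO DISK = N'C:\Database\Backup\YourDatabase_compressed.bak'
 WITH COMPRESSION, STATS = 10;
GO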

 
Files and Filegroups backup
Creating a file or filegroup backup can be practical when performance or database size is an issue for you. Perhaps taking a backup of your 500GB database takes too long, and you need to consider other options.

You can back up all filegroups separately, but it’s also possible to combine a number of filegroups in a single backup. This makes it easier to balance your backups over several disks if you’d like to. But perhaps it’s easier to create a backup that consists of multiple files. This can be achieved by adding more destination files at the bottom of the “create a backup” GUI. SQL Server then balances the data across the files you added.
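To give you an idea, a backup of a single filegroup looks something like this (placeholder names again):

BACKUP DATABASE [YourDatabase]
 FILEGROUP = N'PRIMARY'
 TO DISK = N'C:\Database\Backup\YourDatabase_primary_fg.bak'
 WITH STATS = 10;
GO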

Adding more destination files to your backup can also increase performance. Jes Schultz Borland (Blog | @grrl_geek) wrote a great article about that. She tested several options to see what the impact on performance is.

 
Copy-only backup
One of the most important backup options (in my opinion) is the copy-only backup. This allows you to create an ad-hoc backup without breaking the backup chain.

A copy-only backup works independently from any previous backup or backup plan. It will not take the last LSN into account, nor will it register itself as the new base for differential or log backups. So if you have a backup plan in place, and you or one of your colleagues needs to create an ad-hoc backup, copy-only is the way to go.
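An ad-hoc copy-only backup is just a regular backup statement with the COPY_ONLY option added (placeholder names again):

BACKUP DATABASE [YourDatabase]
 TO DISK = N'C:\Database\Backup\YourDatabase_adhoc.bak'
 WITH COPY_ONLY, STATS = 10;
GO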

 
Now can we relax?
The short answer is: NO! Your work has only just begun. Now that you have a backup strategy, you need to build it, test it, tune it, and cherish it. Creating the perfect backup strategy isn’t a silver bullet. Your databases change, your processes change, your colleagues change…

So when is the last time you tried to restore a backup from your automated process? You can’t remember? Time to get it done then! You know what they say: a DBA is only as good as his last restore. So if you want to keep working as a DBA for your company, start preparing a test restore now.

 
And then…?
Once you’ve created a backup strategy, the hard work just begins. How are you implementing your backups? Are you planning on using the default SQL Server maintenance plans? Are you building something yourself with SQL Server Agent Jobs and SSIS packages? Maybe you want to buy a solution from a specific vendor you like or know? Well, what about a free solution?

If you’re looking for a cheap way out, building it yourself is a good option. But why not look at the completely free solution by the new MVP Ola Hallengren (Website | @olahallengren)?

It’s a solution used by many of our community members, and won a lot of prizes over the years. Not sure if it’s safe to use? Why don’t you look at the list of companies that use his solution, or read the blog post of Jonathan Kehayias (Blog | @SQLPoolBoy) about it.

Another great resource to start from is the TechNet page about backups. This contains a lot of information about the techniques behind the backup process, and the possible pitfalls you’ll encounter.

 
Conclusion
When creating a backup strategy, you need to take a lot of factors into account. What kind of hardware are you working with? Is the storage you need available? Is it possible to create a full backup every night, or only on weekends?

After building your (custom) solution, you need to spend time on tuning and maintaining it. Your databases aren’t static, and will change every second, every minute, every day. So keep changing your process to perform at its best, and in the end, you will create your own free time to spend on cool things.

T-SQL Tuesday #44 – The second chance

T-SQL Tuesday is a recurring blog party that was started by Adam Machanic (Blog | @AdamMachanic). Each month a blog will host the party, and everyone who wants to can write a blog post about a specific subject.

This month the subject is “The second chance”. If you want to read the opening post, please click the image below to go to the party-starter: Bradley Ball (Blog | @SQLBalls).



 
This month’s topic isn’t easy for me. Even while I’m writing this, I’m still thinking about what it means to me personally. Second chances sound too good to be true. Doing something for a second time, and trying a different approach than the first time, in order to succeed…

Normally I try not to be a person that looks back at previous mistakes, but sometimes that’s easier said than done. But I must say, there are not a lot of things I regret in my life. Only one that’s really close to my heart, and nobody probably knows about. But I won’t bother you with that story…

 
Deserving a second chance
People always say: everybody deserves a second chance. But I don’t completely agree with that. Everyone makes mistakes, and that’s not something to be ashamed of. But if you make the same mistake twice, you need to start wondering if there’s something you could have done to prevent it. But even then, you deserve a second chance.

The people that actually know me, know I’m a nice guy, I’m honest (sometimes even a little bit too honest), and normally you can’t get me angry. But if you screw the same things up over and over again, I’m not that friendly anymore. Let me explain that with an example.

 
No, that wasn’t me!
A while ago I worked with someone who thought he was really good at his job. Personally, I had some issues with that opinion, but I gave him the benefit of the doubt. On a number of occasions he screwed things up, and I thought he should have known that what he was doing was never ever going to work. But still, I was willing to give him that second chance. But then he got me angry. And you won’t like me when I’m angry.

There were a number of SQL Server Agent Jobs running, and they locked up some of our tables and databases. When I asked him to look at that, he said he didn’t run those jobs, and focused on his screen again. So I asked him again, nicely, to look at it. He replied with the same answer.

A little bit angry, I told him the jobs were started on the server, and that he was the only one that was logged on to the server. Then he “suddenly” remembered he started the jobs, and said the locking wasn’t that bad. As a DBA, I took a deep breath, and counted to 10, and waited for him to fix the issue. But if you’re that stubborn, you’re clearly lying to me, and don’t even have the courage to tell me you screwed up, you don’t deserve a second chance in my opinion. At least be honest with yourself and to your colleagues!

 
Honesty gets you a second chance
At this and previous companies I worked for, I always tried to teach the students and interns that they need to be honest and listen to people with experience. Even if things go wrong, and you’re the one to blame, at least tell the truth. For me, that’s the difference between fixing the issue together and moving on, or letting them take the fall all on their own. But this is also an experience that was handed down to me by my colleagues a few years back. This is what happened to me, as I remember it:

When I started my first job in IT, I was offered a job as SQL Server Consultant. That meant that I was responsible for data conversions from different systems to our core system. When I took the job, I had never written a query before. But by listening to colleagues and my mentor (a good friend of mine who worked for the same company), I made it into a development team about 1.5 years after I started my first job.

That meant I was able to access the production system (yes, that’s where the problems began!). These permissions were given to me so I could solve data-related issues in production. Until the day they asked me to update 5 rows in production. I checked and double-checked the T-SQL statement I wrote, asked a colleague to take a look at it, and then took a break away from my computer so I could totally focus on this task when I got back.

I sat down again, looked at the query one last time, and pressed F5… One minute passed… Two minutes passed… And then the query finished… 50,000 rows affected… I slightly panicked, and noticed I had only selected the UPDATE and half of the “WHERE” clause, and no “BEGIN TRAN”… My heart started racing, and I picked up the phone and asked the system administrator (a good friend of mine, who worked at a different location) if he could restore the last backup for me, because I had screwed up. After some questions, and some explanation about my mistake, the last thing he said, before he hung up the phone in anger, was “Which backup? The one that hasn’t run for the last few weeks?”.

I didn’t know what to do. How could I ever fix this? Almost every record in the table was updated, and there was no way of knowing what the old values of the updated records were. So it took all my courage to pick up the phone and ring the system administrator again. All I heard on the other side of the phone was his evil laughter. Before I could ask him what was going on, he told me: “I’m glad you were honest with me. But don’t worry, I’m restoring the backup that was taken an hour ago. There’s no data lost”.

At that moment, I didn’t know what to think or feel. At first I wanted to slap him silly, but a few minutes later I wanted to thank him for his wonderful help. He probably saved my ass, and he never told anyone except my mentor (who also was my direct manager back then, and also a good friend of us both). A few days later, the three of us talked about it face to face, and eventually all laughed about the situation.

 
A wise lesson
But if I learned anything from that situation, besides never running an update without a transaction or “WHERE” clause, it is to be honest. Even though you might think the company will fire you for the mistake you made, it’s always better to tell them than to let them find out themselves. And that’s what I try to tell the students, interns, and junior colleagues I work with. Be honest, and you’ll earn a second chance…