Schedule and run Azure Automation runbooks via Webhook

In one of my previous posts I wrote about using Azure Automation jobs to run your Azure SQL DB maintenance processes. This is a very easy way to schedule your maintenance, but it has some shortcomings in the scheduler options and the visibility of job outcomes.

In this blog post I’ll show you how you can schedule your runbooks through webhooks. That way you’re not bound to the minimum interval of once per hour that Azure Automation schedules impose.

 
Start with a runbook and webhook
In this post I’m using the runbook that I previously created for running database maintenance. Before I created a webhook, I removed the schedule on the runbook itself. This prevents confusion about which schedule started the runbook.

Now that the basics are set, click on the “Webhooks” button, and click “Add Webhook” in the blade that opens:

 
A new blade opens up in the portal, and this blade allows you to create a new webhook. Give the webhook a name and an expiration date (which is mandatory, even though you don’t want it to expire), and copy the URL. You need to store this in a secure place, because it will only be visible in this window while you’re creating the webhook. It can’t be retrieved anywhere else if you lose this URL:

 
The webhook URL will look like this (this specific URL is changed, and won’t work): “https://s2events.azure-automation.net/webhooks?token=eT0%0bKvfBcMeusbZw00RAwP0ZKGqxEUnThBIHAEB0Eq0%0d”

 
The webhook is ready to use, so now we can move on to creating a schedule.

 
Create a webhook scheduler
Just creating a webhook alone will never run the job. In order to do that, we need to create a “Scheduler Job Collection”. In the Azure portal, go to “Scheduler Job Collection”, click on “Add”, and give your Scheduler Job Collection a name. If you already have a job collection you can add this job to it, or you can create a new one. Now go to “Action settings” to configure the webhook URL for this scheduler:

 
Make sure you set the action to “HTTPS”, and the method to “Post”.
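
Under the hood, all the scheduler does is send an HTTPS POST request to the webhook URL. You can test your webhook the same way with a few lines of PowerShell (a minimal sketch; the URL below is a placeholder for your own webhook URL):

# Paste your own webhook URL here (remember: you can't retrieve it after creation)
$webhookUrl = "https://s2events.azure-automation.net/webhooks?token=<your-token>"

# The scheduler does exactly this: an HTTPS POST to the webhook
# Azure Automation should respond with the ID(s) of the job(s) it started
$response = Invoke-RestMethod -Uri $webhookUrl -Method Post
$response.JobIds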

 
Under “Schedule” you can configure the actual schedule. As mentioned earlier, the schedule of an Azure Automation runbook can only be configured in hours. The schedule of a Job Scheduler can be configured in minutes, so this allows you to run a job multiple times an hour if you need to:

 
Conclusion
In some cases it might be necessary to run an Azure Automation runbook multiple times an hour, but that’s (currently) not supported for runbooks. By using a webhook and the job scheduler as a work-around, you’re able to make runbooks just a bit more useful. Before using the job scheduler, we were required to duplicate runbooks and configure their schedules to run at specific times, but this only made things less clear. This works out for us, and hopefully for you too!


Running database maintenance on Azure SQL DB with Azure Automation

This article was also published on dev.getroadmap.com:
Running all of your databases in Azure SQL DB (the PaaS solution, also known as Azure SQL Database) is wonderful, except for one thing: you still need to run database maintenance. And because the SQL Server Agent is not available (which is a shame in my opinion), you have to find a way around that.

In one of my previous posts (Running maintenance on Azure SQL databases), I showed you how to leverage the power of the database maintenance solution, written by Ola Hallengren (Website | @olahallengren). But now that we’re moving into Azure completely, we’re losing that work-around. So how did we fix this?

 
Azure Automation
Microsoft gives you a tool to automate your processes, called “Azure Automation”. With this tool you can schedule “runbooks” that contain PowerShell code. This allows you to execute the stored procedure that performs the actual maintenance on your database.

 
Creating a runbook
In order to create a runbook, you need to login to the Azure portal, navigate to “Automation Accounts”, and create a new account:

 
When that is created (this usually only takes a few seconds), open the account, and click “Runbooks”. Click “Add a runbook” to create a new runbook, choose “Create a runbook”, and give your new runbook a name:

 
It is important that you choose “PowerShell Workflow” as Runbook type.

 
Adding the script
In this example I use the scripts I blogged about earlier. This allows me to just execute a single stored procedure to kick off the maintenance process.

By adding this script to the runbook, a connection to the specific database is opened, and the stored procedure is executed:

 
For your benefit, here is the same script so you can copy-paste it:

    Write-Output "Ready to start inlinescript..."

    inlinescript
    {
        Write-Output "Creating connection..."

        # Create connection to database
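        # (replace Servername, DatabaseName, UserName and Password in the connection string with your own values)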
        $DatabaseConnection = New-Object System.Data.SqlClient.SqlConnection
        $DatabaseConnection.ConnectionString = "Data Source=Servername.database.windows.net;Initial Catalog=DatabaseName;Integrated Security=False;User ID=UserName;Password=Password"
        $DatabaseConnection.Open()

        Write-Output "Creating command..."

        # Create command
        $DatabaseCommand = New-Object System.Data.SqlClient.SqlCommand
        $DatabaseCommand.Connection = $DatabaseConnection
        $DatabaseCommand.CommandText = "EXEC dba.ExecuteMaintenance"
        $DatabaseCommand.CommandTimeout = 0;

        Write-Output "Executing payload..."

        # Execute the query
        $DatabaseCommand.ExecuteNonQuery()

        Write-Output "Closing connection..."

        # Close connection to database
        $DatabaseConnection.Close() 
    }

 
To start using this runbook, you not only need to save it, but also publish it:

 
Schedule
Now that you have a runbook, you can schedule it to run at the time you want it to. You can do that by clicking on “Schedule”, and either creating a new schedule or selecting an existing one (if you want to use a schedule you created earlier):

 
Conclusion
Azure Automation is easy to use, and allows you to do almost anything on Azure by using PowerShell. The only downside I’ve found until now is that the job output and outcome are kind of tucked away in Azure. There’s no way to send out an email when a job fails, for example (which was the setup we used on-premises).

There must be a way around that, but I haven’t found it yet. If I do, that sounds like a good follow-up blog post! 🙂

Backup and relax?

Keeping a good backup strategy is a must for every DBA and database developer. But just creating a backup isn’t enough. Perhaps you don’t have enough storage to keep the full backups of your database for more than a week. Or taking a full backup of a database takes so long that it’s only possible on weekends. So what are your options?

 
RPO and RTO
Your whole backup strategy starts by determining the RPO (Recovery Point Objective) and RTO (Recovery Time Objective). Brent Ozar (Blog | @BrentO) wrote a great blog post about these terms, and explains what they mean.

Basically it means that you need to determine what maximum data loss is allowed, and from there you start creating a backup strategy. So how do you determine these? This is what RPO and RTO look like if you visualize them:

 
Storage
Another thing you want to consider is the storage available for your backups. Most of the time the backups will be stored on a NAS (Network-attached storage), and not on the local server, so storage isn’t a big issue in that case.

I’ve also seen companies that created the backup on the local server, and copied it to a network location after completion. In my opinion that’s one more dependency than you need, but other than that it’s a valid option.

 
Backup types
SQL Server supports multiple backup types. They all have their pros and cons, and give you the ability to create a backup strategy that fits your needs. In this blog post, I’m assuming the database we work with uses the full recovery model.

 
Full backup
A full backup takes a backup of the entire database. With this backup file you’ll be able to recover the entire database, without the need for extra log files. Creating a full backup can take quite some time, depending on the size of the database. Let’s visualize this with an example:

 
Looking at the example, you’ll see that a full backup is created every night. But on Friday night at 8 PM the database crashes, and we need to recover from backup. The last full backup was taken 20 hours earlier, so those 20 hours of data changes are lost.
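
If you’d rather script this than click through the GUI, here’s a minimal sketch using the Backup-SqlDatabase cmdlet, assuming you have the SQLPS module that ships with SQL Server 2012 or later (the instance name, database name, and file path are placeholders you’ll need to change):

Import-Module SQLPS

# Full backup of the entire database
Backup-SqlDatabase -ServerInstance "localhost\SQL2012" -Database "Sandbox" -BackupFile "D:\Backup\Sandbox_Full.bak"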

 
Differential backup
If you have less time to spend on backing up your database every night, one of your options is to take a differential backup. A differential backup only backs up data that has changed since the last full backup. A differential backup can’t be created without taking a full backup first. If you try to create one without a full copy of the database, SQL Server will throw an error:

 
When you create a full backup, a backup chain is started. This means that SQL Server registers, by LSN (Log Sequence Number), which changes are already covered by a backup. A differential backup that you take afterwards contains all changes made since that full backup, up to the moment the differential backup is taken.

A backup chain can’t be started with a differential backup. Also, when you want to restore a differential backup, you need the full backup it’s based on. To put this into perspective, look at the visualization below:

 
At midnight on Monday we create a full backup of the database. On each of the following days we create a differential backup at midnight. On Friday at 8 PM the database crashes, and we need to restore a backup. All we need is the full backup from Monday and differential backup 4. Although it takes less time to create a differential backup, you see that this hasn’t helped you much: you still lost 20 hours of data.
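
Scripted, the only difference from a full backup is the -Incremental switch, which tells Backup-SqlDatabase to create a differential backup (again a sketch with placeholder names, assuming the SQLPS module is loaded as shown above):

# Differential backup: everything that changed since the last full backup
Backup-SqlDatabase -ServerInstance "localhost\SQL2012" -Database "Sandbox" -BackupFile "D:\Backup\Sandbox_Diff.bak" -Incremental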

 
Transaction Log backup
The last major type of backup is the transaction log backup. This backup contains all transactions that were executed after the previous transaction log backup was created (or, for the very first log backup, after the full backup that started the chain). This gives you the opportunity to perform a so-called “point-in-time recovery”.

Just like a differential backup, a transaction log backup can’t be created without taking a full backup first. So a transaction log backup can’t be used to start a backup chain either. Let’s take the same example, and add a transaction log backup every 15 minutes (the blue lines represent the transaction log backups):

 
If the database crashes at the same time as in the previous examples, your data loss is slimmed down from 20 hours to a maximum of 15 minutes. In order to recover your database, you need the full backup created on Monday, the differential backup created on Friday, and all transaction log backups created after Friday’s differential backup. So if the database crash occurs a few seconds before the next transaction log backup, the maximum data loss is 15 minutes. Again, without a full backup first you can’t create a transaction log backup:
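
On the scripting side, a transaction log backup is the same cmdlet with a different -BackupAction (same sketch and placeholders as before):

# Transaction log backup: all transactions since the previous log backup
Backup-SqlDatabase -ServerInstance "localhost\SQL2012" -Database "Sandbox" -BackupFile "D:\Backup\Sandbox_Log.trn" -BackupAction Log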

 
Pitfalls
Whenever you create a database, back up that database, and then throw away that backup file, you can still create a differential or transaction log backup: SQL Server doesn’t check that the last full backup (or differential backup, in case of a transaction log backup) is still present. So remember to always check if there is a valid backup available, either on the server or on your backup location.

 
Backup compression
From SQL Server 2008 onward, you can use a feature called backup compression. Whether or not you compress your backups can make a big difference in performance. A compressed backup is smaller, so it requires less I/O when created, and can increase backup speed significantly. On the other hand, compressing a backup increases CPU usage. So it’s a trade-off you need to consider. But in some cases, this could solve the problem of having a shortage of storage.
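
In a script, compression is a single option on the same cmdlet (placeholders as before; keep in mind that backup compression isn’t available in every edition, so treat this as a sketch):

# Compressed full backup: smaller file and less I/O, at the cost of extra CPU
Backup-SqlDatabase -ServerInstance "localhost\SQL2012" -Database "Sandbox" -BackupFile "D:\Backup\Sandbox_Compressed.bak" -CompressionOption On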

 
Files and Filegroups backup
Creating a file or filegroup backup can be practical when performance or database size is an issue for you. Perhaps taking a backup of your 500GB database takes too long, and you need to consider other options.

You can back up all filegroups separately, but it’s also possible to combine a number of filegroups in a single backup. This makes it easier to balance your backups over several disks when you’d like to. But perhaps it’s easier to create a backup that consists of multiple files. This can be achieved by adding more destination files at the bottom of the “create a backup” GUI. SQL Server then balances the data across the files you added.

Adding more destination files to your backup can also increase performance. Jes Schultz Borland (Blog | @grrl_geek) wrote a great article about that. She tested several options to see what the impact on performance is.
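
To give you an idea of what this looks like in a script, here’s a sketch that backs up a single filegroup and stripes it across two destination files, using plain T-SQL through Invoke-Sqlcmd (the filegroup name and file paths are placeholders):

# Back up one filegroup, striped across two destination files on different disks
Invoke-Sqlcmd -ServerInstance "localhost\SQL2012" -Query "
BACKUP DATABASE [Sandbox]
FILEGROUP = 'PRIMARY'
TO DISK = N'D:\Backup\Sandbox_FG_1.bak',
   DISK = N'E:\Backup\Sandbox_FG_2.bak';"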

 
Copy-only backup
One of the most important backup options (in my opinion) is the copy-only backup. This allows you to create an ad-hoc backup without breaking the backup chain.

A copy-only backup works independently of any previous backup or backup plan. A copy-only backup will not take the last LSN into account, and it won’t register its own LSNs in the backup chain either. So if you have a backup plan in place, and you or one of your colleagues needs to create an ad-hoc backup, copy-only is the way to go.
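
Scripted, it’s one extra switch on the cmdlet we’ve used before (same placeholders and assumptions as the earlier sketches):

# Ad-hoc backup that leaves the backup chain untouched
Backup-SqlDatabase -ServerInstance "localhost\SQL2012" -Database "Sandbox" -BackupFile "D:\Backup\Sandbox_CopyOnly.bak" -CopyOnly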

 
Now can we relax?
The short answer is: NO! Your work has only just begun. Now that you have a backup strategy, you need to build it, test it, tune it, and cherish it. There is no silver bullet for creating the perfect backup strategy. Your databases change, your processes change, your colleagues change…

So when is the last time you tried to restore a backup from your automated process? You can’t remember? Time to get it done then! You know what they say: a DBA is only as good as his last restore. So if you want to keep working as a DBA for your company, start preparing a test restore now.

 
And then…?
Once you’ve created a backup strategy, the hard work just begins. How are you implementing your backups? Are you planning on using the default SQL Server maintenance plans? Are you building something yourself with SQL Server Agent Jobs and SSIS packages? Maybe you want to buy a solution from a specific vendor you like or know? Well, what about a free solution?

If you’re looking for a cheap way out, building it yourself is a good option. But why not look at the completely free solution by the new MVP Ola Hallengren (Website | @olahallengren)?

It’s a solution used by many of our community members, and it has won a lot of prizes over the years. Not sure if it’s safe to use? Why don’t you look at the list of companies that use his solution, or read the blog post by Jonathan Kehayias (Blog | @SQLPoolBoy) about it.

Another great resource to start from is the TechNet page about backups. This contains a lot of information about the techniques behind the backup process, and the possible pitfalls you’ll encounter.

 
Conclusion
When creating a backup strategy, you need to take a lot of factors into account. What kind of hardware are you working with? Is the storage you need available? Is it possible to create a full backup every night, or only on weekends?

After building your (custom) solution, you need to spend time on tuning and maintaining it. Your databases aren’t static, and will change every second, every minute, every day. So keep changing your process to perform at its best, and in the end, you will create your own free time to spend on cool things.

T-SQL Tuesday #39 – Can you shell what the PoSH is Cooking?

T-SQL Tuesday is a recurring blog party that was started by Adam Machanic (Blog | @AdamMachanic). Each month a blog will host the party, and everyone that wants to can write a blog post about a specific subject.

This month the subject is “Can you shell what the PoSH is Cooking?”. If you want to read the opening post, please click the image below to go to the party-starter: Wayne Sheffield (Blog | @DBAWayne).



A few months ago I attended a session by Jeff Wouters (Blog | @JeffWouters) about PowerShell. He talked about how powerful PowerShell really is, and showed us some examples. This was my first encounter with the “commandline on steroids”. Back then I didn’t think I’d use it any time soon, but this is changing fast! The wide variety of possibilities that PowerShell offers made me curious, but I never found the time and a use for it. Until last week…

A great number of things have changed in the last few weeks, and that gave me the opportunity to play around with PowerShell. Starting with reading a tutorial and creating my first few trial scripts, it soon hit me that it isn’t that easy. But on the other hand, if you succeed at writing a script, it’s very powerful!

PowerShell ISE
One of the very first things I tried after starting the PowerShell ISE (the “studio” in which you can write your scripts) was trying to find all objects in my database. The first thing you need to do is:

#Set the Execution-Policy to Unrestricted
Set-ExecutionPolicy Unrestricted

#Run the Import-Module below once, so you can use the SQLSERVER functionality
Import-Module SQLPS

This will allow you to access your SQL Server objects. After that, you can start with the easy stuff like:

dir SQLSERVER:\sql\localhost\SQL2012\databases\Sandbox\Tables

dir SQLSERVER:\sql\localhost\SQL2012\databases\Sandbox\Views

dir SQLSERVER:\sql\localhost\SQL2012\databases\Sandbox\StoredProcedures

dir SQLSERVER:\sql\localhost\SQL2012\databases\Sandbox\Users

This is very basic, and it’s really easy to find out how it works. From PowerShell, the results will look like this:

And from the “PowerShell ISE”, it will look like this:

Another thing I tried is to automatically create a database on my local server:

[void] [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.ConnectionInfo") 
[void] [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") 

$serverInstance = "(local)\SQL2012"
$conn = New-Object -TypeName Microsoft.SqlServer.Management.Common.ServerConnection -ArgumentList $serverInstance 
$server = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server -ArgumentList $conn

$newdb = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Database -ArgumentList $server, "PowerShellToTheRescue"
$newdb.Create()

This script will create a new database called “PowerShellToTheRescue” on your local SQL Server. I assume it’s also possible to create tables in the database, but I haven’t found the time to test that.
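
For what it’s worth, the SMO object model does expose tables and columns as well. Here’s an untested sketch of what that could look like (the table and column names are made up for this example):

[void] [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO")

# Connect to the server and fetch the database we just created
$server = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server -ArgumentList "(local)\SQL2012"
$db = $server.Databases["PowerShellToTheRescue"]

# Define a table with a single integer column, then create it in the database
$table = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Table -ArgumentList $db, "TestTable"
$column = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Column -ArgumentList $table, "ID", ([Microsoft.SqlServer.Management.Smo.DataType]::Int)
$table.Columns.Add($column)
$table.Create()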

PowerShell through SQL Server Management Studio (SSMS)
Another option is to start PowerShell via a menu in SSMS. This means PowerShell starts with the current database as its default context. So you don’t have to use “Import-Module”, and you can start running your queries right away. You can find this option by right-clicking your database, and choosing “Start PowerShell”:

So, what are you saying?!
This month’s T-SQL Tuesday post isn’t that impressive, I know! But what I’m trying to say is that if you try PowerShell, you might find that it’s quite powerful and fun.

And thinking about it, I guess you can expect some more PowerShell posts from me in the next few weeks! 😉