Schedule and run Azure Automation runbooks via Webhook

In one of my previous posts I wrote about Azure Automation jobs to run your Azure SQL DB maintenance processes. This is a very easy way to schedule your maintenance, but there are some shortcomings when you look at the scheduler options and job outcome visibility.

In this blog post I’ll show you how you can schedule your runbooks through webhooks. That way you’re not bound to the minimum interval of once per hour, which is a limit set for Azure Automation schedules.

 
Start with a runbook and webhook
In this post I’m using the runbook that I previously created for running database maintenance. Before I created a webhook, I removed the schedule on the runbook itself, to prevent confusion about which schedule started the runbook.

Now that the basics are set, click on the “Webhooks” button, and click “Add Webhook” in the blade that opens:

 
A new blade opens up in the portal, and this blade allows you to create a new webhook. Give the webhook a name and an expiration date (which is mandatory, even if you don’t want it to expire), and copy the URL. You need to store this in a secure place, because it will only be visible in this window while creating the webhook. It can’t be retrieved anywhere else if you lose this URL:

 
The webhook URL will look like this (this specific URL is changed, and won’t work): “https://s2events.azure-automation.net/webhooks?token=eT0%0bKvfBcMeusbZw00RAwP0ZKGqxEUnThBIHAEB0Eq0%0d”
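
Before wiring the webhook up to a scheduler, you can test it with a plain HTTPS POST, for example from PowerShell. This is a minimal sketch (the URL is the one you copied while creating the webhook); a successful call returns the ID of the job that was started:

    # Paste in the webhook URL you copied while creating the webhook
    $webhookUrl = "https://s2events.azure-automation.net/webhooks?token=..."

    # Trigger the runbook with an HTTPS POST; the response contains the started job ID(s)
    $response = Invoke-RestMethod -Uri $webhookUrl -Method Post
    $response.JobIds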

 
The webhook is ready to use, so now we can move on to creating a schedule.

 
Create a webhook scheduler
Creating a webhook alone will never run the job. In order to do that, we need to create a “Scheduler Job Collection”. In the Azure portal, go to “Scheduler Job Collection”, click on “Add”, and give your Scheduler Job Collection a name. If you already have a job collection you can add this job to it, or create a new one. Now go to “Action settings” to configure the webhook URL for this scheduler:

 
Make sure you set the action to “HTTPS”, and the method to “Post”.

 
Under “Schedule” you can configure the actual schedule. As mentioned earlier, the schedule of an Azure Automation runbook can only be configured in hours. The schedule of a Scheduler Job can be configured in minutes, so this allows you to run a job multiple times an hour in case you need it:
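
As a side note: the Scheduler job can also be created with PowerShell instead of the portal. The sketch below assumes the AzureRM.Scheduler module; I’m writing the cmdlet and parameter names from memory, so treat them as an assumption and verify them against the module documentation before use:

    # Assumption: AzureRM.Scheduler module; all names are illustrative
    New-AzureRmSchedulerHttpJob -ResourceGroupName "MyResourceGroup" `
        -JobCollectionName "MyJobCollection" `
        -JobName "StartMaintenanceRunbook" `
        -Method POST `
        -Uri "https://s2events.azure-automation.net/webhooks?token=..." `
        -Frequency Minute `
        -Interval 15    # run the webhook every 15 minutes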

 
Conclusion
In some cases it might be needed to run an Azure Automation runbook multiple times an hour, but that’s (currently) not supported for runbooks. By using the webhook and job scheduler as a work-around, you’re able to make the runbooks just a bit more useful. Before using the job scheduler, we were required to duplicate runbooks and configure the schedules to run at specific times, but that only made things less clear. This works out for us, and hopefully for you too!


Easy data encryption in Azure

This article was recently also published on dev.getroadmap.com.

For those of you who use Azure today, the security discussion must have come up on some occasion: explaining to managers (and possibly colleagues) that Azure is a lot more secure than an on-premise data center, and that Azure is easier to maintain and more scalable. Trust me, we’ve all been there!

But besides the physical security, there’s also the digital security. These days it’s easier to find a data breach in the news than it is to find an item about a bank robbery. So how can you secure your data in Azure in an easy but solid way, without the hassle of changing your applications?

Encryption could be one of your tools to achieve a secure infrastructure and/or applications. But encryption is a challenge for pretty much everyone. Almost every day we hear about companies not doing it right, or not doing it at all. But luckily, Azure helps us with setting this up with just the click of a button.

Okay, okay, you got me. Maybe a few button clicks…

 
Databases
For your Azure SQL databases, there’s a feature called “Transparent Data Encryption”, or TDE for short. This encrypts your data at rest with “FIPS 140-2 validated 256 bit AES encryption”. Or, in normal words: you encrypt your data with an AES-256 encryption key.

So how do you enable it? There are 2 ways to do so, but I’ll only show you the route via the Azure portal. Information on how to do this via T-SQL can be found here.

First, log in to the Azure portal, and navigate to the database you want to encrypt. Click on “Transparent Data Encryption”, and just with a click of a button you can encrypt your data:

 
This will start the encryption process and, depending on the size of the database, after a while you’ll see that the data is encrypted:

 
This feature allows you to encrypt your database without any application changes. This is because the encryption and decryption is handled in an “intermediate layer” by Azure. The data is decrypted before it’s returned to the client, and the other way around: it’s encrypted before it’s stored. So your applications will continue to work without any changes to the application code or connection string(s) to the database(s).
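
To give you an idea of the T-SQL route mentioned above: enabling TDE comes down to a single ALTER DATABASE statement, and a DMV shows the encryption progress. A minimal sketch (the database name is just an example):

-- Enable TDE on the database (example name)
ALTER DATABASE [YourDatabase] SET ENCRYPTION ON
GO

-- Check the encryption state (3 = encrypted)
SELECT DB_NAME(database_id) AS DatabaseName,
       encryption_state
FROM sys.dm_database_encryption_keys
GO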

 
Storage Accounts encryption
There is also an option to encrypt your Storage Accounts in the same way as TDE works for Azure SQL databases (without any application changes). When you enable this on your Storage Account, please remember that only the new data will be encrypted, and that the existing data won’t be encrypted until it changes. For more information on this, please read this article, and this MSDN thread.

When you’re creating a new Storage Account, you can choose to encrypt it right away:

 
But when you want to encrypt an existing Storage Account with data in it, you need to do it on 2 different levels (it’s a separate setting for BLOB and files):

 
This will encrypt your data with the same algorithm as TDE for SQL Server does: “All data is encrypted using 256-bit AES encryption, one of the strongest block ciphers available.” (source).
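
Enabling the encryption can also be scripted. A sketch, assuming the AzureRM.Storage module (the account and resource group names are made up):

    # Assumption: AzureRM.Storage module; names are illustrative
    Set-AzureRmStorageAccount -ResourceGroupName "MyResourceGroup" `
        -Name "mystorageaccount" `
        -EnableEncryptionService "Blob,File"    # Blob and File are separate settings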

 
Conclusion
For us as a company, enabling these features means that all of our data is encrypted. We’re only sending and receiving data from within Azure, so the communication is also secure. And even though the majority of our data is public data (publicly available data such as flight information, etc.), it’s a safe feeling to know that all our data is encrypted when stored.

Running database maintenance on Azure SQL DB with Azure Automation

This article was also published on dev.getroadmap.com.

Running all of your databases in Azure SQL DB (the PaaS solution, also known as Azure SQL database) is wonderful, except for one thing: you still need to run database maintenance. And because the SQL Server Agent is not available (which is a shame in my opinion), you have to find a way around that.

In one of my previous posts (Running maintenance on Azure SQL databases), I showed you how to leverage the power of the database maintenance solution, written by Ola Hallengren (Website | @olahallengren). But now that we’re moving into Azure completely, we’re losing that work-around. So how did we fix this?

 
Azure Automation
Microsoft gives you a tool to automate your processes, called “Azure Automation”. With this tool you can schedule “runbooks” that contain PowerShell. This allows you to execute the stored procedure that runs the actual maintenance on your database.

 
Creating a runbook
In order to create a runbook, you need to log in to the Azure portal, navigate to “Automation Accounts”, and create a new account:

 
When that is created (this usually only takes a few seconds), open the account, and click “Runbooks”. Click “Add a runbook” to create a new runbook, choose “Create a runbook”, and give your new runbook a name:

 
It is important that you choose “PowerShell Workflow” as Runbook type.
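
As a side note: the runbook can also be created with PowerShell. A sketch, assuming the AzureRM.Automation module (all names are made up):

    # Assumption: AzureRM.Automation module; names are illustrative
    New-AzureRmAutomationRunbook -ResourceGroupName "MyResourceGroup" `
        -AutomationAccountName "MyAutomationAccount" `
        -Name "DatabaseMaintenance" `
        -Type PowerShellWorkflow    # must match the runbook type mentioned above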

 
Adding the script
In this example I use the scripts I blogged about earlier. This allows me to just execute a single stored procedure to kick off the maintenance process.

By adding this to the Runbook a connection to the specific database is opened, and the stored procedure is executed:

 
For your benefit, here is the same script so you can copy-paste it:

    Write-Output "Ready to start inlinescript..."

    inlinescript
    {
        Write-Output "Creating connection..."

        # Create connection to database
        $DatabaseConnection = New-Object System.Data.SqlClient.SqlConnection
        $DatabaseConnection.ConnectionString = "Data Source=Servername.database.windows.net;Initial Catalog=DatabaseName;Integrated Security=False;User ID=UserName;Password=Password"
        $DatabaseConnection.Open()

        Write-Output "Creating command..."

        # Create command
        $DatabaseCommand = New-Object System.Data.SqlClient.SqlCommand
        $DatabaseCommand.Connection = $DatabaseConnection
        $DatabaseCommand.CommandText = "EXEC dba.ExecuteMaintenance"
        $DatabaseCommand.CommandTimeout = 0;

        Write-Output "Executing payload..."

        # Execute the query
        $DatabaseCommand.ExecuteNonQuery()

        Write-Output "Closing connection..."

        # Close connection to database
        $DatabaseConnection.Close() 
    }

 
To start using this runbook, you not only need to save it, but also publish it:

 
Schedule
Now that you have a Runbook, you can schedule it to run at the time you want it to. You can do that by clicking on “Schedule”, and either create a new schedule or select an existing one (if you want to use a schedule you created earlier):
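
The schedule can be scripted as well. A sketch, again assuming the AzureRM.Automation module (the schedule name and times are examples):

    # Assumption: AzureRM.Automation module; names and times are illustrative
    New-AzureRmAutomationSchedule -ResourceGroupName "MyResourceGroup" `
        -AutomationAccountName "MyAutomationAccount" `
        -Name "NightlyMaintenance" `
        -StartTime (Get-Date).Date.AddDays(1).AddHours(2) `
        -DayInterval 1    # run once a day, starting tomorrow at 02:00

    # Attach the schedule to the runbook
    Register-AzureRmAutomationScheduledRunbook -ResourceGroupName "MyResourceGroup" `
        -AutomationAccountName "MyAutomationAccount" `
        -RunbookName "DatabaseMaintenance" `
        -ScheduleName "NightlyMaintenance"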

 
Conclusion
Azure Automation is easy to use, and allows you to do almost anything on Azure by using PowerShell. The only downside I’ve found so far is that the job output and outcome are kind of tucked away in Azure. There’s no way to send out an email when a job fails, for example (which was the setup we used on-premise).

There must be a way around that, but I haven’t found it yet. If I do, that sounds like a good follow-up blog post! 🙂

Automatically convert contained database to non-contained for replication

This article was recently published on dev.getroadmap.com.

In one of my previous posts, I described how to set up replication to an Azure SQL database. This works like a charm, and I still highly recommend using it when you want to migrate data from an on-premise server (or Azure VM) to an Azure SQL DB (PaaS).

But in our environment, we use SQL Server 2016 and contained databases for some of our datasets. Unfortunately (but totally understandable), you can’t set up replication from a contained database. So how do you deal with this? For our use-case, I’ve written a script to automatically change the database from contained to non-contained. And because I’m probably not the only one who needs to do this, I’d like to share it with you.

 
Steps to take
It might sound a bit difficult, but when you think about it, it’s actually quite easy. To go from a contained database with a user to a non-contained database with a login, you need to take the following steps:

 
1) Duplicate user as login on server level
2) Drop user on database level
3) Alter database to non-contained
4) Add login to database

 
The script
Before you start using this, I want to warn you. Using this script on your server(s) or in production is at your own risk. This worked for me, but maybe it doesn’t on your server(s)!

In order to let this script work, you must be able to stop applications and queries from executing on your database. In our case, we can just stop the service(s) that use a specific database (that’s the advantage of microservices). The reason for this is that you must be able to obtain an exclusive lock on the database, in order to switch from contained to non-contained.
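
If stopping the services isn’t enough to get that exclusive lock, one way to force it is to briefly put the database in single-user mode. A sketch; be careful, because this rolls back all open transactions:

-- Kick out all other connections and roll back their open transactions
ALTER DATABASE [Contained database name] SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO

-- ...run the conversion steps below...

-- Allow the applications to connect again afterwards
ALTER DATABASE [Non-contained database name] SET MULTI_USER
GO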

If you can manage this, the script below could work for you too:

--====================================================================================================
/* 1 - Duplicate user on instance level as login */

USE master
GO

CREATE LOGIN [Login non-contained database] WITH PASSWORD=N'[Password]', DEFAULT_DATABASE=[master], CHECK_EXPIRATION=OFF, CHECK_POLICY=OFF
GO

--====================================================================================================
/* 2 - Drop user on database level */

USE [Contained database name]
GO

DROP USER [User contained database]
GO

--====================================================================================================
/* 3 - Alter database to non-contained */ 

USE master
GO

ALTER DATABASE [Contained database name] SET CONTAINMENT = NONE WITH NO_WAIT
GO

--====================================================================================================
/* 4 - Add login to database */

USE [Non-contained database name]
GO

CREATE USER [User non-contained database] FOR LOGIN [Login non-contained database]
GO

ALTER ROLE [db_datareader] ADD MEMBER [User non-contained database]
GO

ALTER ROLE [db_datawriter] ADD MEMBER [User non-contained database]
GO

--====================================================================================================

 
Or, to make it easier to read, an example with actual names:

--====================================================================================================
/* 1 - Duplicate user on instance level as login */

USE master
GO

CREATE LOGIN [Login_RW] WITH PASSWORD=N'Password123!', DEFAULT_DATABASE=[master], CHECK_EXPIRATION=OFF, CHECK_POLICY=OFF
GO

--====================================================================================================
/* 2 - Drop user on database level */

USE ContainedDatabase
GO

DROP USER Login_RW
GO

--====================================================================================================
/* 3 - Alter database to non-contained */ 

USE master
GO

ALTER DATABASE ContainedDatabase SET CONTAINMENT = NONE WITH NO_WAIT
GO

--====================================================================================================
/* 4 - Add login to database */

USE ContainedDatabase
GO

CREATE USER Login_RW FOR LOGIN Login_RW
GO

ALTER ROLE [db_datareader] ADD MEMBER Login_RW
GO

ALTER ROLE [db_datawriter] ADD MEMBER Login_RW
GO

--====================================================================================================

 
Conclusion
Even though I thought that using a contained database could be a big blocking factor for us in the process of migrating data to Azure, it really wasn’t that big of a deal. Especially now that we automated the process, it doesn’t add more than 5 minutes to the process of replicating data to Azure.

Build 2017: Administrating databases via Azure portal and Cloud Shell

This week I attended the Build conference in Seattle, and during the keynote on the first day (at around 1:01:00), Scott Hanselman (Blog | @shanselman) revealed the Cloud Shell integration in the Azure portal. This means that you can use Bash in the Azure portal as of the 10th of May 2017. If you ask me, that’s a HUGE addition to the portal, because now you can actually administer your Azure subscription by only using the portal instead of external tools!

 
Creating storage account for Bash
When you log in to the Azure portal and click on the button for Cloud Shell (top right corner), you are asked to create a storage account to persist your “$Home” directory. There are some costs involved, but it’s needed to make the integrated tool work:

 
Once you’ve clicked the “Create storage” button, your Cloud Shell will be created:

 
Connecting to a database
Now that your Cloud Shell is ready to go, you can start using Bash. This means you can also use sqlcmd from within Bash.

You can connect to a database with sqlcmd, by using the following command:

sqlcmd -S servername.database.windows.net -U username -P password

 
Once the connection to your database has been made, you can run queries against it. For example, request all the database names from your server/instance:
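
That query can also be passed directly on the command line with sqlcmd’s -Q option, for example:

sqlcmd -S servername.database.windows.net -U username -P password -Q "SELECT name FROM sys.databases"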

 
Support on mobile apps
The Cloud Shell can not only be found in the Azure portal; they also announced that the feature is included in the mobile apps for Android and iPhone. This allows you to administer your resources when you’re on the road as well.

 
Conclusion
Now that Microsoft is supporting both Windows and Linux on their Azure platform, the integration and usability of the portal needed to be improved as well. By adding Cloud Shell to the web portal, they’ve taken a huge step in my opinion. And pretty soon they are going to support PowerShell as well:

 
Even though this is one of the smallest announcements at Build this year, I think it might have a big impact on the administrative work of a lot of people out there. Especially now that you can use a single tool (the portal) for everything you need to do. Another example shown by Scott is creating a list of resources from Bash. At least that’ll save me some time, and I can’t be the only one…

Replication: Snapshot Agent fails on date conversion

This article was recently published on dev.getroadmap.com.

In the previous post I wrote, I explained how to set up replication from an on-premise SQL Server instance to an Azure SQL database. While doing this, I came across a very strange issue (or maybe even bug) when setting up replication.

 
The problem child
After working on reproducing the issue for a day, and trying to reduce the issue to a small-scale problem, I came to the conclusion that the problem was (probably) caused by a single primary key on a table in the database:

CREATE TABLE dbo.BuggedTable
	([Day] DATE NOT NULL,
	 SomeId VARCHAR(50) NOT NULL,
	 Amount INT NOT NULL,
	 CONSTRAINT PK_BuggedTable PRIMARY KEY CLUSTERED 
		([Day] ASC,
		SomeId ASC
		)
	)
GO

 
Creating publication & subscriber
The setup of the publication and subscriber wasn’t that difficult. As I said before, there are a few things you need to configure differently than you would for SQL Server to SQL Server replication.

So I won’t talk you through the whole process again, but refer you to the articles instead.

 
Generate initial snapshot
Once the publication and subscription are in place, it’s time to generate the initial snapshot. The snapshot agent prepares the snapshot that contains the schema and data needed to initialize the subscriber(s):

 
In some cases it takes a while, but in the end, I found this “warning” on my screen:

 
When you dig into this by opening the agents tab in the replication monitor, you see the actual error:

 
The complete error states:

Error messages:
Message: Query for data failed
Stack: at Microsoft.SqlServer.Replication.Snapshot.SqlServer.NativeBcpOutProvider.ThrowNativeBcpOutException(CConnection* pNativeConnectionWrapper)
at Microsoft.SqlServer.Replication.Snapshot.SqlServer.NativeBcpOutProvider.BcpOut(String strBcpObjectName, String strBcpObjectOwner, String strBaseBcpObjectName, Boolean fUnicodeConversion, String strDataFile, String strLoadOrderingHint, String strWhereClause, Boolean useTableLockHint, Int32 bcpFileFormatVersion)
at Microsoft.SqlServer.Replication.Snapshot.SqlServer.BcpOutThreadProvider.DoWork(WorkItem workItem)
at Microsoft.SqlServer.Replication.WorkerThread.NonExceptionBasedAgentThreadProc()
at Microsoft.SqlServer.Replication.AgentCore.BaseAgentThread.AgentThreadProcWrapper() (Source: MSSQLServer, Error number: 0)
Get help: http://help/0
Message: Conversion failed when converting date and/or time from character string.
Stack: (Source: MSSQLServer, Error number: 241)
Get help: http://help/241

 
First I thought that the DATE column being part of the primary key was the problem. But then again, another table in a different database had the combination of a DATE and a UNIQUEIDENTIFIER as a primary key. So that couldn’t be it.

 
Digging deeper
During my investigation into this issue, I found the following:

– There was enough space on the disk for the snapshot
– Other database with DATE in primary key (PK) worked (even though combination was DATE + UNIQUEIDENTIFIER, instead of DATE + VARCHAR)
– Adding “-UseInprocLoader” to Snapshot Agent job doesn’t work

So after digging a bit deeper, and trying to reduce the issue to a small-scale problem, I found:

– Reproduced in an empty (new) database, with just the failing table as single object in the database –> STILL FAILS
– Changed object in test DB to use newly created IDENTITY(1,1) as PK, and added Unique Constraint to old PK columns –> WORKS
– Make DATE column part of different PK (together with IDENTITY), and no Unique Constraint –> WORKS
– Changing the PK to just the VARCHAR column –> WORKS
– Changing the PK to just the DATE column –> WORKS
– Setting the database on my local instance (SQL 2016 Dev) to compatibility level 110, just like on test –> WORKS

After a lot of different variables in the test-setup, I found out that it’s probably an old bug that wasn’t properly patched when upgrading the SQL Server engine to a newer version. Let me elaborate on that:

– The bug is reproducible on the test server, which is an upgraded engine from SQL 2012 or 2014 to SQL 2016 RTM
– The bug is reproducible on the production server, which is an upgraded engine from SQL 2014 to SQL 2016 RTM
– The bug is not reproducible on a clean install of SQL 2014
– The bug is not reproducible on a clean install of SQL 2016 RTM
– The bug is not reproducible on a clean install of SQL vNext CTP

 
Finding a work-around
Because I couldn’t find a work-around for this issue, I requested the help of Justin Langford (Blog | @JustinLangford) from Coeo. He pointed me to an article that describes a few possible work-arounds.

For me, adding a row filter for the table did the trick:

 
As described in the article mentioned above, adding the row filter disables the BCP partitioning for this article (table), and the snapshot agent completed without any problem:
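
For reference, the row filter can also be added with T-SQL instead of through the publication properties. A sketch with an always-true filter, using example names (verify the parameters against the sp_articlefilter documentation before running this):

-- Add an always-true row filter to the article (names are examples)
EXEC sp_articlefilter
	@publication = N'MyPublication',
	@article = N'BuggedTable',
	@filter_name = N'FLT_BuggedTable',
	@filter_clause = N'1 = 1',
	@force_invalidate_snapshot = 1
GO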

 
Conclusion
Although this bug should’ve been fixed ages ago, it looks like it (re)appeared after an engine upgrade. I’m not sure how this happened, but all I know is that it took me about 3 days to find, reproduce, reduce and work around the problem.

So hopefully this article will save you that time!

Setting up replication from on-premise SQL Server to Azure SQL DB

This article was recently published on dev.getroadmap.com.

For performance reasons we are looking for a way to split our write operations from our read operations. A good way to do that is by duplicating the database onto another server (on-premise) or platform (Azure SQL DB). When you look at the options (Availability Groups (AG), database mirroring, clustering, replication, Azure sync groups, etc.), the easiest way to quickly duplicate a database and keep the data up-to-date at the same time is SQL Server replication.

Replication to another on-premise instance is easy. You just follow the steps in the wizard, it works out-of-the-box, and the chances of this process failing are small. Replicating data to an Azure SQL database is a bit more of a struggle. Just one single word took me a few HOURS of investigation and a lot of swearing…

 
Setting up replication and databases
Before you start setting the whole thing up, make sure that replication is installed on your instance. If that’s not done yet, install the needed components via the installer. This component can be installed without rebooting the server, or the need to restart the instance.

If you needed to install replication, make sure you restart the SQL Server Agent. For some reason, the configuration change works on the instance without any problems, but the Agent can’t run the needed jobs until you restart it.

The next step is creating 2 databases: a publisher (on the on-premise instance) and a subscriber (an Azure SQL database):

 
And create a table that can be replicated:

CREATE TABLE dbo.SomeObject
	(ID INT IDENTITY(1,1),
	 SomeValue VARCHAR(100),
	 PRIMARY KEY (ID))

 
Setup the publisher
Now that you have 2 databases, it’s time to set up the publisher. I’m not guiding you through the whole replication setup (there are plenty of tutorials out there that are WAY better than my explanation), but I’ll show you the highlights.

First, start by clicking “New Publication” on your on-premise instance:

 
I chose to run the distributor on the same server.

If you follow the wizard, it will prompt you to choose where to store the snapshots:

 
Copy this path into, for example, Notepad, because it comes in handy later in the process.

As publication type, choose “Transactional publication”:

 
Again, follow the wizard and choose the objects you want to replicate (in my case just that single table):

 
For the security settings, I chose to use the SQL Server Agent.

After finishing the steps in the wizard, give your publication a name, and finish this setup.

 
Create a subscriber
On the on-premise instance, right-click the newly created publication, and add a new subscriber:

 
The distribution agent will remain on the distributor (push subscription).

When you’re prompted to add a subscriber, choose “Add SQL Server Subscriber”:

 
Here comes the part that took me a while to figure out. When you try to replicate changes to an Azure SQL database, and you use the normal connection, it will fail to connect in the end. But if you use a secure connection, replication works like a charm. So make sure you use the following server name ([Server Name].database.secure.windows.net):

 
Adding that single word took me a few hours to figure out. Hopefully this will save you a few hours of investigation, searching for dodgy error messages, ending up at “errorcode 40852”, and finally finding the solution.

When this is set up, select the target database (subscriber):

 
When you’re prompted for the distribution agent security information, make sure you use a SQL Server login that has the correct permissions on the Azure SQL database:

 
Start the replication
Now that everything is in place and configured, it’s time to create the initial snapshot. This snapshot will be used to create all the objects and data that are supposed to be replicated on the Azure SQL database. Open the snapshot agent by clicking “View Snapshot Agent Status” on the publication:

 
The snapshot agent hasn’t run yet, so press “Start” to start the process:

 
When this process has completed, you should see something similar to this:

 
Now it’s time for a final test. Insert a record on the source database (publisher):

INSERT INTO dbo.SomeObject
	(SomeValue)
VALUES
	('Test value')

 
You should see this record pop up on the subscriber after a few seconds (the default latency is 5 seconds):

SELECT *
FROM dbo.SomeObject

 
You can also check the “Synchronization Status”, and you should see a command being replicated:

 
Conclusion
Out of all the available options, I still think replication is the easiest way to replicate data into Azure. The only downside is that there doesn’t seem to be a lot of information about this, especially when you look at the “database.SECURE.windows.net” part.

If you need more than just the one-way traffic, or you want a bit more functionality, maybe replication isn’t your weapon of choice. But for me, it worked like a charm… Eventually…