Automatically convert contained database to non-contained for replication

This article was recently published on dev.getroadmap.com:

 
 
In one of my previous posts, I described how to set up replication to an Azure SQL database. This works like a charm, and I still highly recommend using it when you want to migrate data from an on-premise server (or Azure VM) to an Azure SQL database (PaaS).

But in our environment, we use SQL Server 2016 and contained databases for some of our datasets. Unfortunately (but totally understandable), you can’t set up replication from a contained database. So how do you deal with this? For our use case, I’ve written a script to automatically change the database from contained to non-contained. And because I’m probably not the only one who needs to do this, I’d like to share it with you.

 
Steps to take
It might sound a bit difficult, but when you think about it, it’s actually quite easy. To go from a contained database with a user to a non-contained database with a login, you need to take the following steps:

 
1) Duplicate user as login on server level
2) Drop user on database level
3) Alter database to non-contained
4) Add login to database

 
The script
Before you start using this, I want to warn you. Using this script on your server(s) or in production is at your own risk. This worked for me, but maybe it doesn’t on your server(s)!

For this script to work, you must be able to stop applications and queries from executing against your database. In our case, we can just stop the service(s) that use a specific database (that’s the advantage of microservices). The reason for this is that you need an exclusive lock on the database in order to switch it from contained to non-contained.
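Because obtaining that exclusive access can be tricky when connections keep coming in, one option (a minimal sketch, not part of the original script, and only if you can afford to kick out all other sessions) is to temporarily put the database in single-user mode, and switch it back to multi-user once you’re done:

-- Kick out other connections and take exclusive access (use with care!)
ALTER DATABASE [Contained database name] SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO

-- ... run the containment change from the script below ...

-- Give the applications access again afterwards
ALTER DATABASE [Contained database name] SET MULTI_USER
GO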

If you can manage this, the script below could work for you too:

--====================================================================================================
/* 1 - Duplicate user on instance level as login */

USE master
GO

CREATE LOGIN [Login non-contained database] WITH PASSWORD=N'[Password]', DEFAULT_DATABASE=[master], CHECK_EXPIRATION=OFF, CHECK_POLICY=OFF
GO

--====================================================================================================
/* 2 - Drop user on database level */

USE [Contained database name]
GO

DROP USER [User contained database]
GO

--====================================================================================================
/* 3 - Alter database to non-contained */ 

USE master
GO

ALTER DATABASE [Contained database name] SET CONTAINMENT = NONE WITH NO_WAIT
GO

--====================================================================================================
/* 4 - Add login to database */

USE [Non-contained database name]
GO

CREATE USER [User non-contained database] FOR LOGIN [Login non-contained database]
GO

ALTER ROLE [db_datareader] ADD MEMBER [User non-contained database]
GO

ALTER ROLE [db_datawriter] ADD MEMBER [User non-contained database]
GO

--====================================================================================================

 
Or, to make it easier to read, an example with actual names:

--====================================================================================================
/* 1 - Duplicate user on instance level as login */

USE master
GO

CREATE LOGIN [Login_RW] WITH PASSWORD=N'Password123!', DEFAULT_DATABASE=[master], CHECK_EXPIRATION=OFF, CHECK_POLICY=OFF
GO

--====================================================================================================
/* 2 - Drop user on database level */

USE ContainedDatabase
GO

DROP USER Login_RW
GO

--====================================================================================================
/* 3 - Alter database to non-contained */ 

USE master
GO

ALTER DATABASE ContainedDatabase SET CONTAINMENT = NONE WITH NO_WAIT
GO

--====================================================================================================
/* 4 - Add login to database */

USE ContainedDatabase
GO

CREATE USER Login_RW FOR LOGIN Login_RW
GO

ALTER ROLE [db_datareader] ADD MEMBER Login_RW
GO

ALTER ROLE [db_datawriter] ADD MEMBER Login_RW
GO

--====================================================================================================

 
Conclusion
Even though I thought that using a contained database could be a big blocking factor for us in the process of migrating data to Azure, it really wasn’t that big of a deal. Especially now that we’ve automated it, it doesn’t add more than 5 minutes to the process of replicating data to Azure.

Build 2017: Administrating databases via Azure portal and Cloud Shell

This week I attended the Build conference in Seattle, and during the keynote on the first day (at around 1:01:00), Scott Hanselman (Blog | @shanselman) revealed the Cloud Shell integration in the Azure portal. This means that you can use Bash in the Azure portal as of the 10th of May 2017. If you ask me, that’s a HUGE addition to the portal, because now you can actually administer your Azure subscription using only the portal, instead of external tools!

 
Creating storage account for Bash
When you log in to the Azure portal and click on the button for Cloud Shell (top right corner), you are asked to create a storage account to persist your “$Home” directory. There are some costs involved, but it’s needed to make the integrated tool work:

 
Once you’ve clicked the “Create storage” button, your Cloud Shell will be created:

 
Connecting to a database
Now that your Cloud Shell is ready to go, you can start using Bash. This means you can also use sqlcmd from within Bash.

You can connect to a database with sqlcmd, by using the following command:

sqlcmd -S servername.database.windows.net -U username -P password

 
Once the connection to your database has been made, you can run queries against it. For example, request all the database names from your server/instance:
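For example, a simple query like the one below (typed at the sqlcmd prompt, followed by GO to execute it) lists all database names:

SELECT name FROM sys.databases
GO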

 
Support on mobile apps
The Cloud Shell can’t only be found in the Azure portal; it was also announced that the feature is included in the mobile apps for Android and iOS. This allows you to administer your resources when you’re on the road as well.

 
Conclusion
Now that Microsoft is supporting both Windows and Linux on their Azure platform, the integration and usability of the portal needed to be improved as well. By adding Cloud Shell to the web portal, they’ve taken a huge step in my opinion. And pretty soon they are going to support PowerShell as well:

 
Even though this is one of the smallest announcements at Build this year, I think it might have a big impact on the administrative work of a lot of people out there, especially now that you can use a single tool (the portal) for everything you need to do. Another example shown by Scott is creating a list of resources from Bash. That alone will save me some time, and I can’t be the only one…

Replication: Snapshot Agent fails on date conversion

This article was recently published on dev.getroadmap.com:

 
 

In the previous post I wrote, I explained how to set up replication from an on-premise SQL Server instance to an Azure SQL database. While doing this, I came across a very strange issue (or maybe even a bug) when setting up replication.

 
The problem child
After working on reproducing the issue for a day, and trying to reduce the issue to a small-scale problem, I came to the conclusion that the problem was (probably) caused by a single primary key on a table in the database:

CREATE TABLE dbo.BuggedTable
	([Day] DATE NOT NULL,
	 SomeId VARCHAR(50) NOT NULL,
	 Amount INT NOT NULL,
	 CONSTRAINT PK_BuggedTable PRIMARY KEY CLUSTERED 
		([Day] ASC,
		SomeId ASC
		)
	)
GO

 
Creating publication & subscriber
The setup of the publication and subscriber wasn’t that difficult. As I said before, there are a few things you need to configure differently than you would for SQL Server to SQL Server replication.

So I won’t talk you through the whole process again, but refer you to the articles instead.

 
Generate initial snapshot
Once the publication and subscription are in place, it’s time to generate the initial snapshot. The Snapshot Agent prepares the snapshot that contains the schema and data needed to initialize the subscriber(s):

 
In some cases it takes a while, but in the end, I found this “warning” on my screen:

 
When you dig into this by opening the agents tab in the replication monitor, you see the actual error:

 
The complete error states:

Error messages:
Message: Query for data failed
Stack: at Microsoft.SqlServer.Replication.Snapshot.SqlServer.NativeBcpOutProvider.ThrowNativeBcpOutException(CConnection* pNativeConnectionWrapper)
at Microsoft.SqlServer.Replication.Snapshot.SqlServer.NativeBcpOutProvider.BcpOut(String strBcpObjectName, String strBcpObjectOwner, String strBaseBcpObjectName, Boolean fUnicodeConversion, String strDataFile, String strLoadOrderingHint, String strWhereClause, Boolean useTableLockHint, Int32 bcpFileFormatVersion)
at Microsoft.SqlServer.Replication.Snapshot.SqlServer.BcpOutThreadProvider.DoWork(WorkItem workItem)
at Microsoft.SqlServer.Replication.WorkerThread.NonExceptionBasedAgentThreadProc()
at Microsoft.SqlServer.Replication.AgentCore.BaseAgentThread.AgentThreadProcWrapper() (Source: MSSQLServer, Error number: 0)
Get help: http://help/0
Message: Conversion failed when converting date and/or time from character string.
Stack: (Source: MSSQLServer, Error number: 241)
Get help: http://help/241

 
First I thought that the DATE column being part of the primary key was the problem. But then again, another table in a different database had the combination of a DATE and a UNIQUEIDENTIFIER as a primary key. So that couldn’t be it.

 
Digging deeper
During my investigation into this issue, I found the following:

– There was enough space on the disk for the snapshot
– Another database with a DATE column in the primary key (PK) worked (even though the combination was DATE + UNIQUEIDENTIFIER, instead of DATE + VARCHAR)
– Adding “-UseInprocLoader” to the Snapshot Agent job didn’t help

So after digging a bit deeper, and trying to reduce the issue to a small-scale problem, I found:

– Reproduced in an empty (new) database, with just the failing table as single object in the database –> STILL FAILS
– Changed the object in the test DB to use a newly created IDENTITY(1,1) column as PK, and added a unique constraint on the old PK columns (see the sketch below this list) –> WORKS
– Make DATE column part of different PK (together with IDENTITY), and no Unique Constraint –> WORKS
– Changing the PK to just the VARCHAR column –> WORKS
– Changing the PK to just the DATE column –> WORKS
– Setting database on local instance (SQL 2016 Dev) to comp. 110, just like on test –> WORKS
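As an illustration of the IDENTITY + unique constraint variation mentioned above, the test table could (hypothetically) be rewritten like this:

CREATE TABLE dbo.BuggedTable
	(Id INT IDENTITY(1,1) NOT NULL,
	 [Day] DATE NOT NULL,
	 SomeId VARCHAR(50) NOT NULL,
	 Amount INT NOT NULL,
	 CONSTRAINT PK_BuggedTable PRIMARY KEY CLUSTERED (Id ASC),
	 CONSTRAINT UQ_BuggedTable_Day_SomeId UNIQUE ([Day] ASC, SomeId ASC)
	)
GO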

After trying a lot of different variables in the test setup, I found out that it’s probably an old bug that wasn’t properly patched when the SQL Server engine was upgraded to a newer version. Let me elaborate on that:

– The bug is reproducible on the test server, which is an upgraded engine from SQL 2012 or 2014 to SQL 2016 RTM
– The bug is reproducible on the production server, which is an upgraded engine from SQL 2014 to SQL 2016 RTM
– The bug is not reproducible on a clean install of SQL 2014
– The bug is not reproducible on a clean install of SQL 2016 RTM
– The bug is not reproducible on a clean install of SQL vNext CTP

 
Finding a work-around
Because I couldn’t find a work-around for this issue, I asked Justin Langford (Blog | @JustinLangford) from Coeo for help. He pointed me to an article that describes a few possible work-arounds.

For me the row filter did the trick:
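For reference, a hypothetical example of adding such a row filter to the failing article with sp_articlefilter and sp_articleview (run at the publisher, in the publication database; the publication and article names below are made up) could look like this:

-- A filter of "1 = 1" keeps all rows, but makes the Snapshot Agent
-- skip BCP partitioning for this article
EXEC sp_articlefilter
 @publication = N'MyPublication',
 @article = N'BuggedTable',
 @filter_name = N'FLTR_BuggedTable',
 @filter_clause = N'1 = 1',
 @force_invalidate_snapshot = 1
GO

EXEC sp_articleview
 @publication = N'MyPublication',
 @article = N'BuggedTable',
 @view_name = N'SYNC_BuggedTable',
 @filter_clause = N'1 = 1',
 @force_invalidate_snapshot = 1
GO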

 
As described in the article mentioned above, adding the row filter disables the BCP partitioning for this article (table), and the Snapshot Agent completed without any problem:

 
Conclusion
Although this bug should’ve been fixed ages ago, it looks like it (re)appeared after an engine upgrade. I’m not sure how this happened, but all I know is that it took me about 3 days to find, reproduce, reduce and work around the problem.

So hopefully this article will save you that time!

T-SQL Tuesday #82 – To the cloud… And beyond!!!

T-SQL Tuesday is a recurring blog party that was started by Adam Machanic (Blog | @AdamMachanic). Each month a blog will host the party, and everyone who wants to can write a post about a specific subject.

This month the subject is “To the cloud… And beyond!!!”. If you want to read the opening post, please click the image below to go to the invitation.



 
As I mentioned in the invitation, Azure allows us to easily host and scale our platform. But it also comes with a whole new set of challenges. Some of the observations I made in the last few months:

 
Agent is missing in action…
One of the biggest challenges for me when I started migrating stuff to Azure was the missing SQL Server Agent. I never thought about how often I actually used it until I missed it! There’s no “let me quickly schedule this script for tomorrow morning” anymore.

And although there are multiple solutions to this problem available in Azure (Scheduler, Automation), I picked the easy way out. Because we’re running 2 platforms in parallel (an on-premise private cloud and Azure), I decided to create a VM in the private-cloud platform to run operational tasks like scheduled maintenance. This is not a solution I would like to keep for the upcoming years, but it allows us to easily run scheduled tasks, without losing time on figuring out how Azure Automation or Scheduler works.

For more information on this, please read this post.

 
Contained datastore
A thing that can make migrations to the cloud a bit more difficult is that Azure SQL databases are basically a contained datastore (you would call it a “contained database” when you run it on-premise). This means that you (by default) can’t connect from one database to the other. This could mean that you need to rewrite your applications or stored procedures, or maybe even redesign your entire database/application/domain model.

This also means that running a stored procedure from Ola Hallengren’s maintenance solution can only be done on the specific database, and not from the master database like the on-premise version does. These small challenges can be overcome, but it does mean code duplication in your databases, because the maintenance procedures need to be deployed to every single database.

 
Running database maintenance
Just like on-premise databases, Azure SQL databases also need to be serviced every now and then. To make sure I’m not reinventing the wheel I’ve re-used a solution that has proved its worth in practice: Ola Hallengren’s “SQL Server Maintenance Solution”.

But because every Azure SQL database is a contained datastore, I’ve made some minor changes to his solution. You can read more about that here.

 
Performance issues & alerting
One of the biggest differences for me between an on-premise database and an Azure SQL database is how I need to determine the cause of performance problems and solve them. When you’re working on an on-premise database it’s easy to just look at the CPU and RAM used by your instance, and watch the disk IO in the performance monitor in Windows. But you don’t have those in Azure of course. There you need to work with “DTU” percentages, and “Data IO” and “Log IO” counters. But DTU (Database Transaction Unit) makes monitoring performance a bit too abstract if you ask me. Just look at the explanation Microsoft gives us regarding DTU:

 

The Database Transaction Unit (DTU) is the unit of measure in SQL Database that represents the relative power of databases based on a real-world measure: the database transaction. We took a set of operations that are typical for an online transaction processing (OLTP) request, and then measured how many transactions could be completed per second under fully loaded conditions.

For example, a Premium P11 database with 1750 DTUs provides 350x more DTU compute power than a Basic database with 5 DTUs.
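If you want to look at those counters with T-SQL instead of the portal graphs, a minimal sketch is to query sys.dm_db_resource_stats from within the Azure SQL database itself (it keeps roughly one hour of history, in 15-second intervals):

SELECT TOP 20
	end_time,
	avg_cpu_percent,
	avg_data_io_percent,
	avg_log_write_percent,
	-- the DTU percentage is effectively the highest of the three counters above
	(SELECT MAX(v) FROM (VALUES (avg_cpu_percent), (avg_data_io_percent), (avg_log_write_percent)) AS x(v)) AS approx_dtu_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC
GO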

 
One thing I’m really happy with is that Adam Machanic published a new version of his sp_WhoIsActive for Azure a few months ago. This gives me the opportunity to quickly look at the active workload, and find the cause of performance issues.
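A minimal call (assuming the procedure is deployed to the database; the optional parameters are shown purely as an example) looks like this:

EXEC dbo.sp_WhoIsActive
 @get_plans = 1,            -- include query plans
 @get_additional_info = 1   -- include extra session/context info
GO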

 
Once you have a gut feeling about the performance of your databases, you run into a new challenge with monitoring: DTU usage has a maximum retention of just 1 hour, and I would really like to see a longer retention. The alerts you can set on DTU usage are a challenge as well. I definitely don’t want to over-scale my databases, so in some cases a long-running process can trigger such an alert. When this happens during the day it’s annoying, but still okay. But it also woke me up in the middle of the night during an on-call rotation. This forced me to set some of the alerts to the maximum window of 45 minutes / 1 hour, and just disable others. In my opinion, this could use some work by Microsoft.

 
Conclusion
So in general I’m REALLY satisfied with the functionality that Azure SQL database provides us with, even though there’s always something to wish for of course. The move to Azure gives me as a DBA a new set of challenges and (in some cases) demands other/new skills, but it also allows me to do more than just manage a bunch of databases. I’m now also a part-time system administrator, network admin, (data) architect, developer, etc. This is something I personally really like, because it’s exactly that which allows me to broaden my horizon, and use a set of skills I hadn’t used for a while when working on on-premise databases.

So even though some people were afraid the DBA role would disappear because of the rise of the cloud, I’m convinced it’s here to stay, except that it will be a role that requires more skills than before. And I think that’s a good thing.

Invitation: T-SQL Tuesday #82 – To the cloud… And beyond!!!

T-SQL Tuesday is a recurring blog party that was started by Adam Machanic (Blog | @AdamMachanic). Each month a blog will host the party, and everyone who wants to can write a post about a specific subject.

 
This month I’ll be the T-SQL Tuesday host, and I’m really honored! Thanks for inviting me to be the host, Adam!

 
The topic
When Adam asked me if I wanted to host another T-SQL Tuesday, I immediately knew a topic I wanted to talk about: The cloud, and (if you want to) specifically about Azure SQL database.

The last time we blogged about the cloud was back in December of 2013, when Jorge Segarra hosted this monthly party. Since then, “the cloud” (to use that buzzword again) has changed a lot, and I think the possibilities are endless nowadays.

The reason I would like to see you all blog about this topic is that I’ve been working with Azure SQL databases a lot since I switched jobs in December. Currently I’m working for a small start-up that has a cloud-first focus. This means that the main (if possible) host for our (data) platform is Azure. And although hosting our platform in Azure makes it easier for us to scale parts of that platform, it also gives us new challenges to overcome. And I’m probably not the only one with that experience…

So with that said, I would like to give you the opportunity to blog about the cloud, in the broadest sense that you can imagine. I’m imagining really interesting blogs about migrating to the cloud, missing features in Azure SQL database, how you determine the cause of performance problems and solve them, etc. Please surprise us all with your view on the cloud.

 
The rules
– Your post must be published between 00:00:00 UTC and 23:59:59 UTC on Tuesday 13th of September
– Include the T-SQL Tuesday logo at the top of your post, and your post must link back to this one (trackbacks and comments are moderated, so it might take some time before they’re visible) or tweet about it using the hashtag #TSQL2sDay
– If you like this, check Steve Jones’ (Blog | @way0utwest) blog post that contains the list of topics, and contact Adam Machanic (Blog | @AdamMachanic) if you’d like to host this party yourself

Running maintenance on Azure SQL databases

This article was recently published on dev.getroadmap.com:

 

To keep your data healthy and in top shape, we all know you need to run something called database maintenance on your databases. This prevents things like performance problems or unrecoverable data from happening, and that is sort of the core business of DBAs. And when we look at how this can be performed on a regular basis, an on-premise instance is a bit easier to maintain than an Azure SQL database. That’s why I want to share our experience with you, hoping that it can save you some time when you start implementing this.

 
Platform setup
In order to get a better understanding of why certain choices were made, I want to share a very basic version of the setup of our environment.

Our platform consists of 2 major parts: a private cloud part (that we see as “on-premise”), and an Azure part. Those 2 parts combined make up our platform, and this platform hosts all of our services, applications, databases, etc. The development approach we use is that everything should be designed with cloud-first in mind, but only if it’s the right tool for the job, and with the exclusion of some data.

The databases we use in Azure are all what Microsoft calls “Azure SQL databases”. This means that no virtual machines of any kind are running in the Azure part of our platform; this is also known as DBaaS (Database as a Service).

When I draw this platform, and only focus on the data part, it will look something like this:

 
One of the advantages of this setup is that we can leverage the power of both parts of the platform. As you’ll see later on in this blog, there are some things that we (need to) run from the on-premise instances and some things fully on Azure.

 
Big shoutout to Ola
Before I go into detail, I want to give full kudos to Ola Hallengren (Website | @olahallengren). He has spent a lot of his time building a SQL Server Maintenance Solution that is completely free for everyone to use. And he did such an excellent job that a lot of companies (including huge companies) use his solution to run maintenance tasks on their databases.

None of the scripts below were written by me; only small changes were made in order to make things more clear when the solution is deployed to an environment. The original scripts can be downloaded via the download page on Ola’s website.

 
Backups & Integrity check
Taking backups of your database and making sure there is no corruption in the datafiles is an essential part of the maintenance solution written by Ola. But because Azure SQL databases have a built-in maintenance solution (for example backups: full backups weekly, differentials hourly, and transaction log backups every 5 minutes; source), we don’t need to worry about that ourselves.

 
Index maintenance & Update Statistics
Indexes and statistics are the core of your performance-based maintenance processes. These make sure your queries run fast, and should provide you with stable and predictable performance. This is especially needed on an Azure database, since you can’t monitor it like you would an on-premise database.

Because Azure SQL databases are run on shared hardware that you can’t monitor yourself, Microsoft provides us with a number of different performance counters that we can use to check the status/health of our databases. The most important counters are CPU usage, Data IO, Log IO and DTU usage (a combination of the previously mentioned counters). The DTU counter is the most abstract (to me at least), because it’s explained by Microsoft as:

 

The Database Transaction Unit (DTU) is the unit of measure in SQL Database that represents the relative power of databases based on a real-world measure: the database transaction. We took a set of operations that are typical for an online transaction processing (OLTP) request, and then measured how many transactions could be completed per second under fully loaded conditions.

For example, a Premium P11 database with 1750 DTUs provides 350x more DTU compute power than a Basic database with 5 DTUs.

 
And for me, who was used to monitoring physical hardware, that is a bit of a different approach when digging into performance-related issues. It’s not that index and statistics maintenance isn’t important when you work on an on-premise database, but it’s a slightly bigger challenge to monitor the direct effects of an index rebuild or statistics update.

But because every Azure SQL database is a contained database, you need to deploy the stored procedures from Ola’s solution to every single database. So to keep it clear for everyone which table and stored procedures belong to the maintenance solution, I’ve changed Ola’s scripts slightly to create all objects in a specific schema named “dba”. So first of all, let’s create the schema:

CREATE SCHEMA dba
GO

 
And then create the table that is used to log all of the maintenance commands and their outcome:

CREATE TABLE dba.CommandLog
	(ID INT IDENTITY(1,1) NOT NULL CONSTRAINT PK_CommandLog PRIMARY KEY CLUSTERED,
	 DatabaseName sysname NULL,
	 SchemaName sysname NULL,
	 ObjectName sysname NULL,
	 ObjectType CHAR(2) NULL,
	 IndexName sysname NULL,
	 IndexType TINYINT NULL,
	 StatisticsName sysname NULL,
	 PartitionNumber INT NULL,
	 ExtendedInfo XML NULL,
	 Command NVARCHAR(MAX) NOT NULL,
	 CommandType NVARCHAR(60) NOT NULL,
	 StartTime DATETIME NOT NULL,
	 EndTime DATETIME NULL,
	 ErrorNumber INT NULL,
	 ErrorMessage NVARCHAR(MAX) NULL)
GO

 
Now that these are created, you can create the stored procedures that execute the actual index and statistics maintenance. They are too long to post here as a code snippet, but you can download the CommandExecute script here, and the IndexOptimize script here.

But because we want to schedule these procedures later on, I decided to create an additional stored procedure in every database that is deployed alongside the maintenance objects:

CREATE PROCEDURE dba.ExecuteMaintenance
AS


EXECUTE dba.IndexOptimize
 @Databases = '<Insert database name>',
 @FragmentationLow = 'INDEX_REORGANIZE',
 @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
 @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
 @FragmentationLevel1 = 5,
 @FragmentationLevel2 = 30,
 @UpdateStatistics = 'ALL',
 @LogToTable = 'Y',
 @Execute = 'Y'

GO

 
This allows us to run maintenance with specific options on the different databases.
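And because @LogToTable is set to 'Y', you can afterwards check what the maintenance did (and whether it ran into errors) by querying the dba.CommandLog table created earlier:

SELECT TOP 20
	StartTime,
	EndTime,
	CommandType,
	ObjectName,
	IndexName,
	ErrorNumber,
	ErrorMessage
FROM dba.CommandLog
ORDER BY StartTime DESC
GO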

Scheduling
After this was all set up, I needed to come up with a way to run these procedures on a schedule. And as you might know, Azure SQL databases don’t have a SQL Server Agent, so that’s where our on-premise platform comes in handy. Just for this I created a new virtual machine in our private cloud, and installed SQL Server on that machine to utilize the SQL Server Agent. This server (the operations server) runs all of the scheduled operational jobs (including maintenance and some monitoring jobs) against our Azure environment.

But to run a process from this operations machine on one of our Azure databases, I needed to create a linked server first:

EXEC sp_addlinkedserver 
 @server=N'ServerName__DatabaseName', 
 @srvproduct=N'Azure SQL Db',
 @provider=N'SQLNCLI', 
 @datasrc=N'<SERVERNAME>,1433',
 @catalog=N'DatabaseName';
GO

EXEC sp_addlinkedsrvlogin 
 @rmtsrvname = 'ServerName__DatabaseName', 
 @useself = 'FALSE', 
 @locallogin = NULL,
 @rmtuser = '<USERNAME>',
 @rmtpassword = '<PASSWORD>';
GO

 
And all that’s left now is to create a SQL Server Agent job that executes the “ExecuteMaintenance” stored procedure on the Azure database:

EXEC [<LinkedServerName>].<DatabaseName>.dba.ExecuteMaintenance
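
For completeness, a hypothetical sketch of what that SQL Server Agent job could look like on the operations server (the job, step and schedule names are made up, and the linked server is the one created above):

USE msdb
GO

-- Create the job itself
EXEC dbo.sp_add_job
 @job_name = N'Maintenance - DatabaseName';

-- Add a T-SQL step that calls the maintenance procedure through the linked server
EXEC dbo.sp_add_jobstep
 @job_name = N'Maintenance - DatabaseName',
 @step_name = N'Run dba.ExecuteMaintenance',
 @subsystem = N'TSQL',
 @command = N'EXEC [ServerName__DatabaseName].DatabaseName.dba.ExecuteMaintenance';

-- Schedule it to run every night at 02:00
EXEC dbo.sp_add_schedule
 @schedule_name = N'Nightly at 02:00',
 @freq_type = 4,            -- daily
 @freq_interval = 1,
 @active_start_time = 020000;

EXEC dbo.sp_attach_schedule
 @job_name = N'Maintenance - DatabaseName',
 @schedule_name = N'Nightly at 02:00';

-- Register the job on the local (operations) server
EXEC dbo.sp_add_jobserver
 @job_name = N'Maintenance - DatabaseName';
GO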

 
The reason I chose this approach is that, at the moment I was implementing the maintenance solution, I didn’t want to lose any time figuring out how Azure Scheduler or Automation works.

But if you want a good resource on that, please check out Pieter Vanhove’s (Blog | @Pieter_Vanhove) blog, because he wrote a great post about it.

 
Conclusion
When I look at how this solution is set up, I’m the first one to admit that this isn’t a perfect or ideal solution. But in the end, it gives the rest of the team a clear way to start or restart the maintenance process: just log in to the operations server, start SQL Server Management Studio (SSMS), open the SQL Server Agent jobs, find the job associated with the database they want to run the maintenance on, and that’s it. But for future scalability and maintainability, we might need to implement another solution to do this.

T-SQL Tuesday #48 – Cloud Atlas

T-SQL Tuesday is a recurring blog party that was started by Adam Machanic (Blog | @AdamMachanic). Each month a blog will host the party, and everyone who wants to can write a post about a specific subject.

This month the subject is “Cloud Atlas”. If you want to read the opening post, please click the image below to go to the party-starter: Jorge Segarra (Blog | @SQLChicken).



 
In the last few years, “the cloud” has become more and more important in our lives. Not only in IT, or for database and data professionals, but also in our personal lives. Take a look around you. How many people do you still see carrying around a USB drive to store their data? Now do the same, and count the people that use a cloud solution for their data, like Dropbox, SkyDrive (if we’re still allowed to call it that…), or Copy.com.

So everyone is storing their data in the cloud now. From personal information like a copy of a passport, to text files with people’s password lists. So without jumping to conclusions just yet, I guess we trust the companies that hold our data, right…?

 
Trust
But now comes the hard (and controversial) part: we trust them with our personal data, but not our corporate data. It’s okay to store your passwords and private documents in the cloud, but it’s unthinkable that you store business data in the cloud!

So where is that distrust coming from? It probably has something to do with the whole NSA-thing. There, I said it! Without completely going off-topic, I would like to explain something about this statement.

My personal opinion is that people in the Netherlands are different from the rest of the world when it comes to their privacy. They don’t care if the ISP is monitoring web traffic. They know it’s being monitored, but they accept that as a fact. When it comes to downloading games, music or movies, they think they’re entitled to that. But when it comes to government agencies monitoring the corporate data they put in the cloud, they draw the line.

 
Are you… the one…?
In the past few years, the discussion about on-premise and off-premise data has intensified. People try to convince each other with arguments, and think the other is completely wrong.

A while ago, I encountered my first “cloud-company”. I’ve done some consulting for them, and they’ve set themselves the goal to move to the cloud within the next few years. The biggest advantages they see are costs, scalability and administration. And I fully agree with them.

 
Why use a cloud solution
Choosing a WASD (Windows Azure SQL Database) solution makes it easier to maintain your environment. You don’t have to monitor the hardware, or move to another server if your hardware fails or dies. This is all being taken care of by Microsoft.

Looking at the cost of a cloud solution is easy: it saves you money. When you run on-premise servers, you need a data center, electricity, a maintenance team, etc. When you use a cloud solution, you only pay for the hardware you need. And if you’re done with it, you can just shut down the machine you were working on.

The same goes for scalability. For example, if you need to run a billing process, you could “spawn” twice as many cloud machines. This makes scalability a piece of cake. And again, when you’re done, just get rid of the machines you don’t use anymore. This makes it easier for companies to run big processes in a smaller amount of time.

 
Trying it out
The only time I’ve used WASD is on the machine that Jamie Thomson (Blog | @jamiet) made available to the SQL Family (read about it here). This was later taken over by Red-Gate, but I’m not sure this is still available.

But if you want to try it out, just create your own AdventureWorks on Azure. You can download the scripts here, and start your Azure trial here.