Easy pivot: From key-value pairs to columns

This article was recently published on dev.getroadmap.com:

 

If there is one thing I pretty much hate doing in T-SQL, it's PIVOT and UNPIVOT. Even though I've used them a few times over the last couple of years, it's an adventure to figure out how they work every single time. And I know a lot of people struggle with this part of T-SQL, so let's take a look at a (hopefully) simple example.

 
Key-Value pairs
The idea of using key-value pairs to store data isn't new, but with the rapid development of cloud solutions like Azure and the hype around NoSQL databases, storing data as key-value pairs got a big boost. Developers especially (in my experience) love using key-value pairs, because it's easy for them to consume the data in an application. But it gives the database professional an extra challenge, because we're used to retrieving columns with values instead of a record per value. So how can we turn those key-value pairs into columns?

 
Flights
To show you how to change key-value pairs to columns, let's create an example based on flights. If one of our customers travels by plane, there is some basic flight information we need in order to show them the flight status in the app. So let's create a table to store that dataset:

CREATE TABLE dbo.Flights
(
	FlightId UNIQUEIDENTIFIER,
	[Key] VARCHAR(255),
	[Value] VARCHAR(255)
)

 
and insert a flight:

INSERT INTO dbo.Flights
	(FlightId, [Key], [Value])
VALUES
	('2A0C8B05-682A-41CE-8516-C6070CD92851', 'FlightId', '3074e015-62b2-4f76-a8b1-463c53cd79c5'),
	('2A0C8B05-682A-41CE-8516-C6070CD92851', 'AirlineIATACode', 'VY'),
	('2A0C8B05-682A-41CE-8516-C6070CD92851', 'FlightNumber', '8336'),
	('2A0C8B05-682A-41CE-8516-C6070CD92851', 'DepartureAirportCode', 'RTM'),
	('2A0C8B05-682A-41CE-8516-C6070CD92851', 'DepartureAirportName', 'Rotterdam The Hague Airport'),
	('2A0C8B05-682A-41CE-8516-C6070CD92851', 'DepartureDateTime', '2016-08-12 12:15'),
	('2A0C8B05-682A-41CE-8516-C6070CD92851', 'DepartureTerminal', '1'),
	('2A0C8B05-682A-41CE-8516-C6070CD92851', 'Gate', NULL),
	('2A0C8B05-682A-41CE-8516-C6070CD92851', 'ArrivalAirportCode', 'BCN'),
	('2A0C8B05-682A-41CE-8516-C6070CD92851', 'ArrivalAirportName', 'Barcelona–El Prat'),
	('2A0C8B05-682A-41CE-8516-C6070CD92851', 'ScheduledArrivalDateTime', '2016-08-12 14:14'),
	('2A0C8B05-682A-41CE-8516-C6070CD92851', 'FlightIsCancelled', '0')

 
So now we have a dataset that consists of 12 key-value pairs:

 
Converting to columns
But when you’re running a query you would like to have columns instead of rows. So by using a PIVOT statement, you can convert these rows into columns:

SELECT
	FlightId,
	AirlineIATACode,
	FlightNumber,
	DepartureAirportCode,
	DepartureAirportName,
	DepartureDateTime,
	DepartureTerminal,
	Gate,
	ArrivalAirportCode,
	ArrivalAirportName,
	ScheduledArrivalDateTime,
	FlightIsCancelled
FROM
	(
		SELECT
			FlightId,
			[Key],
			[Value] 
		FROM dbo.Flights
	) AS SourceTable
PIVOT
	(
	MIN([Value])	--Needs to be an aggregate function
	FOR [Key] IN
		(
			AirlineIATACode,
			FlightNumber,
			DepartureAirportCode,
			DepartureAirportName,
			DepartureDateTime,
			DepartureTerminal,
			Gate,
			ArrivalAirportCode,
			ArrivalAirportName,
			ScheduledArrivalDateTime,
			FlightIsCancelled
		)
	) AS PivotOutput

 
Because the keys are always the same for flights, I can hard-code those key names in the PIVOT statement. And because the PIVOT statement needs an aggregate function to retrieve the value, I added the MIN() function; since there is only one value per key and flight, this doesn't change the output in any way.
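
If you prefer to avoid PIVOT altogether, the same result can be produced with plain conditional aggregation: group by FlightId and pick each value with a MAX(CASE …) expression per key. A minimal sketch (shortened to a few columns for brevity):

-- Same idea as the PIVOT above, written as conditional aggregation (only a few columns shown)
SELECT
	FlightId,
	MAX(CASE WHEN [Key] = 'AirlineIATACode'      THEN [Value] END) AS AirlineIATACode,
	MAX(CASE WHEN [Key] = 'FlightNumber'         THEN [Value] END) AS FlightNumber,
	MAX(CASE WHEN [Key] = 'DepartureAirportCode' THEN [Value] END) AS DepartureAirportCode,
	MAX(CASE WHEN [Key] = 'FlightIsCancelled'    THEN [Value] END) AS FlightIsCancelled
FROM dbo.Flights
GROUP BY FlightId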

Now we have the same dataset, but converted into columns:

 
So from now on, converting key-value pairs to columns shouldn’t be a problem anymore!

Running maintenance on Azure SQL databases

This article was recently published on dev.getroadmap.com:

 

To keep your data healthy and in top shape, we all know you need to run database maintenance on your databases. This prevents things like performance problems or unrecoverable data, and that is pretty much the core business of DBAs. And when we look at how this can be performed on a regular basis, an on-premise instance is a bit easier to maintain than an Azure SQL database. That's why I want to share our experience with you, hoping that it can save you some time when you start implementing this.

 
Platform setup
In order to get a better understanding of why certain choices were made, I want to share a very basic version of the setup of our environment.

Our platform consists of two major parts: a private cloud part (which we consider "on-premise") and an Azure part. Those two parts combined make up our platform, and this platform hosts all of our services, applications, databases, etc. Our development approach is cloud-first: everything should be designed to run in the cloud, but only if it's the right tool for the job, and with the exclusion of some data.

The databases we use in Azure are all what Microsoft calls "Azure SQL databases". This means that no virtual machines of any kind are running on the Azure part of our platform, an approach also known as DBaaS (Database as a Service).

When I draw this platform, and only focus on the data part, it will look something like this:

 
One of the advantages of this setup is that we can leverage the power of both parts of the platform. As you’ll see later on in this blog, there are some things that we (need to) run from the on-premise instances and some things fully on Azure.

 
Big shoutout to Ola
Before I go into detail, I want to give full kudos to Ola Hallengren (Website | @olahallengren). He has spent a lot of his time building a SQL Server Maintenance Solution that is completely free for everyone to use. And he did such an excellent job that a lot of companies (including huge companies) use his solution to run maintenance tasks on their databases.

None of the scripts below were written by me; I've only made small changes to make things clearer when the solution is deployed to an environment. The original scripts can be downloaded via the download page on Ola's website.

 
Backups & Integrity check
Taking backups of your database and making sure there is no corruption in the data files is an essential part of the maintenance solution written by Ola. But because Azure SQL databases have a built-in maintenance solution (for backups: full backups weekly, differentials hourly, and transaction log backups every 5 minutes, source), we don't need to worry about that ourselves.

 
Index maintenance & Update Statistics
Indexes and statistics are the core of your performance-related maintenance processes. They make sure your queries run fast and should provide you with stable and predictable performance. This is especially needed on an Azure database, since you can't monitor it like you would an on-premise database.

Because Azure SQL databases are run on shared hardware that you can’t monitor yourself, Microsoft provides us with a number of different performance counters that we can use to check the status/health of our databases. The most important counters are CPU usage, Data IO, Log IO and DTU usage (a combination of the previously mentioned counters). The DTU counter is the most abstract (to me at least), because it’s explained by Microsoft as:

 

The Database Transaction Unit (DTU) is the unit of measure in SQL Database that represents the relative power of databases based on a real-world measure: the database transaction. We took a set of operations that are typical for an online transaction processing (OLTP) request, and then measured how many transactions could be completed per second under fully loaded conditions.

For example, a Premium P11 database with 1750 DTUs provides 350x more DTU compute power than a Basic database with 5 DTUs.

 
And for me, who was used to monitoring physical hardware, that is a bit of a different approach when digging into performance-related issues. It's not that index and statistics maintenance isn't important when you work on an on-premise database, but on Azure it's a slightly bigger challenge to monitor the direct effects of an index rebuild or statistics update.

But because every Azure SQL database is a contained database, you need to deploy the stored procedures from Ola’s solution to every single database. So to keep it clear for everyone which table and stored procedures belong to the maintenance solution, I’ve changed Ola’s scripts slightly to create all objects in a specific schema named “dba”. So first of all, let’s create the schema:

CREATE SCHEMA dba
GO

 
And then create the table that is used to log all of the maintenance commands and their outcome:

CREATE TABLE dba.CommandLog
	(ID INT IDENTITY(1,1) NOT NULL CONSTRAINT PK_CommandLog PRIMARY KEY CLUSTERED,
	 DatabaseName sysname NULL,
	 SchemaName sysname NULL,
	 ObjectName sysname NULL,
	 ObjectType CHAR(2) NULL,
	 IndexName sysname NULL,
	 IndexType TINYINT NULL,
	 StatisticsName sysname NULL,
	 PartitionNumber INT NULL,
	 ExtendedInfo XML NULL,
	 Command NVARCHAR(MAX) NOT NULL,
	 CommandType NVARCHAR(60) NOT NULL,
	 StartTime DATETIME NOT NULL,
	 EndTime DATETIME NULL,
	 ErrorNumber INT NULL,
	 ErrorMessage NVARCHAR(MAX) NULL)
GO

 
Now that these are created, you can create the stored procedures that execute the actual index and statistics maintenance. They are too long to post here as a code snippet, but you can download the CommandExecute script here, and the IndexOptimize script here.

But because we want to schedule these procedures later on, I decided to create an additional stored procedure in every database that is deployed alongside the maintenance objects:

CREATE PROCEDURE dba.ExecuteMaintenance
AS


EXECUTE dba.IndexOptimize
 @Databases = '<Insert database name>',
 @FragmentationLow = 'INDEX_REORGANIZE',
 @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
 @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
 @FragmentationLevel1 = 5,
 @FragmentationLevel2 = 30,
 @UpdateStatistics = 'ALL',
 @LogToTable = 'Y',
 @Execute = 'Y'

GO

 
This allows us to run maintenance with specific options on the different databases.
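
Because @LogToTable = 'Y' is set, every command that IndexOptimize executes is logged to the dba.CommandLog table we created earlier. A quick way to review the most recent run and spot any errors (a minimal sketch):

-- Review the most recent maintenance commands and any errors they raised
SELECT TOP (50)
	StartTime,
	EndTime,
	CommandType,
	ObjectName,
	IndexName,
	ErrorNumber,
	ErrorMessage
FROM dba.CommandLog
ORDER BY StartTime DESC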

Scheduling
After this was all set up, I needed to come up with a way to run these procedures on a schedule. And as you might know, Azure SQL databases don't have a SQL Server Agent, so that's where our on-premise platform comes in handy. Just for this I created a new virtual machine in our private cloud and installed SQL Server on that machine to utilize the SQL Server Agent. This server (the operations server) runs all of the scheduled operational jobs (including maintenance and some monitoring jobs) against our Azure environment.

But to run a process from this operations machine against one of our Azure databases, I first needed to create a linked server:

EXEC sp_addlinkedserver 
 @server=N'ServerName__DatabaseName', 
 @srvproduct=N'Azure SQL Db',
 @provider=N'SQLNCLI', 
 @datasrc=N'<SERVERNAME>,1433',
 @catalog=N'DatabaseName';
 GO

EXEC sp_addlinkedsrvlogin 
 @rmtsrvname = 'ServerName__DatabaseName', 
 @useself = 'FALSE', 
 @locallogin=NULL,
 @rmtuser = '<USERNAME>',
 @rmtpassword = '<PASSWORD>'
 GO
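
Before scheduling anything against this linked server, it's worth verifying that the connection and the remote login actually work. sp_testlinkedserver raises an error if the connection fails, so it makes a quick smoke test (the server name below is the same placeholder as above):

EXEC sp_testlinkedserver N'ServerName__DatabaseName';
GO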

 
And all that's left now is to create a SQL Server Agent job that executes the "ExecuteMaintenance" stored procedure on the Azure database:

EXEC [<LinkedServerName>].<DatabaseName>.dba.ExecuteMaintenance
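
If you prefer to script that SQL Server Agent job instead of creating it through SSMS, the msdb job procedures can do it. A minimal sketch with placeholder names, assuming a nightly run at 01:00:

USE msdb
GO

-- Create a job with a single T-SQL step that calls the maintenance procedure over the linked server
EXEC dbo.sp_add_job @job_name = N'Maintenance - DatabaseName';

EXEC dbo.sp_add_jobstep
	@job_name  = N'Maintenance - DatabaseName',
	@step_name = N'Run ExecuteMaintenance',
	@subsystem = N'TSQL',
	@command   = N'EXEC [<LinkedServerName>].<DatabaseName>.dba.ExecuteMaintenance;';

-- Nightly schedule (example values: daily at 01:00)
EXEC dbo.sp_add_jobschedule
	@job_name = N'Maintenance - DatabaseName',
	@name     = N'Nightly',
	@freq_type = 4,			-- daily
	@freq_interval = 1,
	@active_start_time = 10000;	-- 01:00:00

-- Run the job on the local (operations) server
EXEC dbo.sp_add_jobserver @job_name = N'Maintenance - DatabaseName', @server_name = N'(LOCAL)';
GO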

 
The reason I chose this approach is that, at the time I was implementing the maintenance solution, I didn't want to lose any time figuring out how Azure Scheduler or Azure Automation works.

But if you want a good resource on that, please check out Pieter Vanhove's blog (Blog | @Pieter_Vanhove), because he wrote a great post about exactly that.

 
Conclusion
When I look at how this solution is set up, I'm the first to admit that it isn't a perfect or ideal solution. But in the end, it gives the rest of the team a clear procedure when they need to start or restart the maintenance process: just log in to the operations server, start SQL Server Management Studio (SSMS), open the SQL Server Agent jobs, find the job associated with the database they want to run the maintenance on, and that's it. For future scalability and maintainability, though, we might need to implement another solution.

On which port is SQL Server running?

Earlier this week, a good friend asked me the simple question: “How can I check on which port my SQL Server is currently running?” There are multiple ways, so let’s take a look at the ways you can find out the answer!

 
SQL Server Error Log
It’s a misconception that only error messages are logged in the SQL Server error log. There are also informational messages logged, which you can use to find the port on which SQL Server is currently running:

xp_readerrorlog 0, 1, N'Server is listening on'

 
The result:

 
SQL Server Configuration Manager
The SQL Server Configuration Manager is a tool which is installed alongside SQL Server. In this tool you can, for example, change the TCP/IP settings of your SQL Server. It also shows you the current port on which SQL Server is running:

 
DMV
You can also query the system DMVs:

SELECT DISTINCT 
    local_tcp_port
FROM sys.dm_exec_connections
WHERE local_tcp_port IS NOT NULL

or

SELECT local_tcp_port
FROM   sys.dm_exec_connections
WHERE  session_id = @@SPID
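
On SQL Server 2012 and later there is also a dedicated listener DMV that shows every TCP endpoint the instance is listening on (it requires the VIEW SERVER STATE permission). A small sketch:

-- All TCP listeners of this instance, including the port (SQL Server 2012+)
SELECT ip_address,
       port,
       state_desc
FROM sys.dm_tcp_listener_states
WHERE type_desc = 'TSQL'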

 
Command prompt
Finding the used port via a command prompt requires some more information. First, you need to find the process ID (PID) that SQL Server is using. The quickest way is to use the Task Manager in Windows. Once you have the PID, you can run the command below:

netstat -ano | findstr [PID]

The result:
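
If you prefer to stay in the command prompt for the whole exercise, you can also look up the PID there instead of in Task Manager, and then feed it into netstat. For example (sqlservr.exe is the SQL Server database engine process; replace 1234 with the PID you find):

tasklist /FI "IMAGENAME eq sqlservr.exe"
netstat -ano | findstr 1234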

 
Registry
The port number can also be found in the registry. Just remember that the folder you see in the screenshot below contains my instance name (SQL2014DEV). Change this to your own instance name!

 
You can also use the "master.dbo.xp_regread" stored procedure to read this registry key from T-SQL:

DECLARE @TCPPort NVARCHAR(5),
        @RegKeyName VARCHAR(8000);

SET @RegKeyName = CONCAT('Software\Microsoft\Microsoft SQL Server\',
                         @@SERVICENAME,
                         '\MSSQLServer\SuperSocketNetLib\TCP')

EXEC xp_regread
  @rootkey = 'HKEY_LOCAL_MACHINE',
  @key = @RegKeyName,
  @value_name = 'TcpPort',
  @value = @TCPPort OUTPUT;

SELECT @TCPPort;

 
Event Viewer
SQL Server also logs the port in the Windows Event Viewer. If you open the Application log and filter on event ID 26022, you'll see the port that is in use:

 
PowerShell
After I posted this blog, Johan Bijnens (@alzdba) sent me a message: I forgot the PowerShell option!

#tcpport.ps1
#Evaluates the SQL Server instances on a Windows server and returns the TCP port number used by each instance
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO")  | Out-Null
$m = New-Object ('Microsoft.SqlServer.Management.Smo.WMI.ManagedComputer') 'HOME'
$m.ServerInstances | ForEach-Object { $m.Name + '\' + $_.Name + ', ' +
       $m.ServerInstances[$_.Name].ServerProtocols['Tcp'].IPAddresses['IP1'].IPAddress.IPAddressToString + ':' +
       $m.ServerInstances[$_.Name].ServerProtocols['Tcp'].IPAddresses['IPAll'].IPAddressProperties['TcpDynamicPorts'].Value
       }

 
Thanks Johan for reminding me of this! The full code can be found on MSDN.

Find SQL Server Agent job history retention with T-SQL

In SQL Server, the SQL Server Agent has its own history retention setting. This retention determines how much execution history of SQL Server Agent jobs is kept. But if you have a lot of instances across your domain, or if you install a new server, how do you determine whether this value is set correctly?

 
Doing it old-school: via the GUI
You can check the SQL Server Agent retention via the GUI by right-clicking on the SQL Server Agent and clicking on Properties. In the window that pops up, click on History, and you'll see something like this:

 
T-SQL / Registry
The way I wanted to check it is with T-SQL. Because I want to run the statement against 40+ instances, I use a multi-server query for that. The reason I do that is that it returns all the results in a single table, which makes reviewing and comparing the results a lot easier.

But there doesn't seem to be a way to query the configured values directly. With a little bit of searching, I found out that you can use the stored procedure below to change one of the values:

EXEC msdb.dbo.sp_set_sqlagent_properties @jobhistory_max_rows = 999
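
The same procedure accepts both retention parameters, so you can set the total and the per-job limits in one call. A sketch with example values:

EXEC msdb.dbo.sp_set_sqlagent_properties
	@jobhistory_max_rows = 1000,		-- total history rows kept (example value)
	@jobhistory_max_rows_per_job = 100	-- history rows kept per job (example value)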

 
So by dissecting this procedure, I found out that SQL Server just reads the values from registry keys. I took out the piece that I needed to read the retention values:

DECLARE @jobhistory_max_rows INT = NULL,
        @jobhistory_max_rows_per_job INT = NULL


EXECUTE master.dbo.xp_instance_regread N'HKEY_LOCAL_MACHINE',
                                        N'SOFTWARE\Microsoft\MSSQLServer\SQLServerAgent',
                                        N'JobHistoryMaxRows',
                                        @jobhistory_max_rows OUTPUT,
                                        N'no_output'
SELECT @jobhistory_max_rows = ISNULL(@jobhistory_max_rows, -1)


EXECUTE master.dbo.xp_instance_regread N'HKEY_LOCAL_MACHINE',
                                        N'SOFTWARE\Microsoft\MSSQLServer\SQLServerAgent',
                                        N'JobHistoryMaxRowsPerJob',
                                        @jobhistory_max_rows_per_job OUTPUT,
                                        N'no_output'


SELECT @jobhistory_max_rows, @jobhistory_max_rows_per_job

 
But when I wanted to look up the values in the registry, to see if it returned the correct results, I couldn’t find the actual keys. I thought I was losing my mind, but there’s a catch…

 
xp_instance_regread vs xp_regread
There are 2 stored procedures you can use to read registry keys from your SQL Server:

DECLARE @jobhistory_max_rows INT = NULL

EXECUTE master.dbo.xp_instance_regread N'HKEY_LOCAL_MACHINE',
                                        N'SOFTWARE\Microsoft\MSSQLServer\SQLServerAgent',
                                        N'JobHistoryMaxRows',
                                        @jobhistory_max_rows OUTPUT,
                                        N'no_output'

SELECT @jobhistory_max_rows

 
or

DECLARE @jobhistory_max_rows INT = NULL

EXECUTE master.dbo.xp_regread N'HKEY_LOCAL_MACHINE',
                                        N'SOFTWARE\Microsoft\MSSQLServer\SQLServerAgent',
                                        N'JobHistoryMaxRows',
                                        @jobhistory_max_rows OUTPUT,
                                        N'no_output'

SELECT @jobhistory_max_rows

 
The difference between the two is perfectly explained in this Stack Overflow thread:

xp_regread reads the literal registry path that you specify. xp_instance_regread “converts” the path you specify so that it matches the instance of SQL Server that you’re currently using.

 
So if you run multiple instances on your machine, you want to use the instance version of the stored procedure. If you’re just running 1 (default) instance, you can use both versions.
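
To illustrate the difference: on a named instance, the literal path contains the instance ID folder, so the xp_regread variant only finds the keys if you spell that path out yourself. A sketch, assuming a SQL Server 2014 instance named SQL2014DEV (instance ID MSSQL12.SQL2014DEV); adjust the path for your own instance:

DECLARE @jobhistory_max_rows INT = NULL

-- Literal registry path for a named instance; the instance ID below is just an example
EXECUTE master.dbo.xp_regread N'HKEY_LOCAL_MACHINE',
                              N'SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL12.SQL2014DEV\SQLServerAgent',
                              N'JobHistoryMaxRows',
                              @jobhistory_max_rows OUTPUT,
                              N'no_output'

SELECT @jobhistory_max_rows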

So eventually, I searched for the instance folder, and I did find the registry keys:

 
Conclusion
Especially for very large environments, it's a good idea to write system and/or server checks in T-SQL. That way, you can script them in SCOM (or another monitoring tool), use them with SQL Server Policy Based Management (and I would really like to recommend this Pluralsight course from a very good friend of mine: Boris Hristov (Blog | @BorisHristov)!), or just run them as a multi-server query.

Restoring or Moving a database with CDC enabled

When you have CDC (Change Data Capture) enabled on your database, and you want to move it to another instance or restore it somewhere else, you need to take some extra steps to make sure CDC is transferred correctly. Asking myself how this works, I decided to do some testing.

 
Create test resources
In order to test this, we need to create some test resources. I’ve created a new database, added a table and enabled CDC:

CREATE DATABASE [CDC_DB]
 ON PRIMARY
(NAME = N'CDC_DB', FILENAME = N'C:\Databases\#Backup\CDC_DB.mdf')
 LOG ON
(NAME = N'CDC_DB_log', FILENAME = N'C:\Databases\#Backup\CDC_DB_log.ldf')
GO

USE CDC_DB
GO

CREATE TABLE dbo.Customer
  (CustomerID INT CONSTRAINT PK_Customer PRIMARY KEY IDENTITY(1,1),
   FirstName VARCHAR(50),
   LastName VARCHAR(50))
GO

EXEC sys.sp_cdc_enable_db
GO

EXEC sys.sp_cdc_enable_table
  @source_schema = 'dbo',
  @source_name = 'Customer',
  @supports_net_changes = 1,
  @role_name = NULL,
  @index_name = 'PK_Customer'
GO


INSERT INTO dbo.Customer
  (FirstName, LastName)
VALUES
  ('John', 'Doe')
GO
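
If you want to verify that CDC is actually capturing changes before taking the backup, you can query the change data through the table-valued function that CDC generated for the capture instance. A sketch, assuming the default capture instance name dbo_Customer (the capture job needs a moment to harvest the insert, so the max LSN can still be NULL right after inserting):

-- Read all captured changes for the dbo_Customer capture instance
DECLARE @from_lsn BINARY(10) = sys.fn_cdc_get_min_lsn('dbo_Customer'),
        @to_lsn   BINARY(10) = sys.fn_cdc_get_max_lsn()

SELECT *
FROM cdc.fn_cdc_get_all_changes_dbo_Customer(@from_lsn, @to_lsn, N'all')
GO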

Now that we have the resources, we can take a backup of the database:

BACKUP DATABASE CDC_DB TO DISK = 'C:\Databases\#Backup\CDC_DB.bak'

 
Restoring the backup
To show you that the default restore doesn't work, let's restore the backup next to the original database:

RESTORE DATABASE CDC_DB_RESTORE FROM DISK='C:\Databases\#Backup\CDC_DB.bak'
WITH
   MOVE 'CDC_DB' TO 'C:\Databases\#Backup\CDC_DB.mdf',
   MOVE 'CDC_DB_log' TO 'C:\Databases\#Backup\CDC_DB_log.ldf'

If we now try to select data from the change table of CDC, we get an error message:
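
For reference, the select that fails looks something like this (assuming the default capture instance name dbo_Customer); it fails because the cdc schema and the change table simply don't exist in the restored copy:

USE CDC_DB_RESTORE
GO
-- The change table of the dbo_Customer capture instance; this fails after a plain restore
SELECT * FROM cdc.dbo_Customer_CT
GO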

 
But CDC was enabled in the old database, so why wasn't it restored? If you look for a solution, Books Online / MSDN doesn't say anything about this. But luckily other community members have blogged about this before (see this and this article). There is an option you can add to your restore:

USE master
GO

DROP DATABASE CDC_DB_RESTORE
GO

RESTORE DATABASE CDC_DB_RESTORE FROM DISK='C:\Temp\#BackupTest\CDC_DB.bak'
WITH
   MOVE 'CDC_DB' TO 'C:\Temp\#BackupTest\CDC_DB.mdf',
   MOVE 'CDC_DB_log' TO 'C:\Temp\#BackupTest\CDC_DB_log.ldf'
, KEEP_CDC
GO

 
If you add "KEEP_CDC" to your restore statement, it will also restore CDC. But that's not your only option.

 
Detach and Attach

Another option you have is to detach your database, copy the files to another location, and attach the database again. Let’s detach the database first:

USE master
GO
ALTER DATABASE [CDC_DB_RESTORE] SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO
EXEC master.dbo.sp_detach_db @dbname = N'CDC_DB_RESTORE'
GO

 
Now that’s done, we can copy the files to another directory if needed. You could also just attach it on the same instance:

USE master
GO
CREATE DATABASE [CDC_DB_RESTORE] ON 
( FILENAME = N'C:\Temp\#BackupTest\CDC_DB.mdf' ),
( FILENAME = N'C:\Temp\#BackupTest\CDC_DB_log.ldf' )
 FOR ATTACH
GO

 
This also allows you to use CDC like nothing happened. The database is still CDC enabled, the change tables are there, and the captured data is still in the change table. But there’s one thing missing when you move your CDC database to another instance: the capture and cleanup jobs.

 
CDC Jobs
So how do we recover the jobs? Script them from one instance, and create them on the other instance? There's a better way to do that. You can just run the statements below to add the jobs to your instance:

USE CDC_DB_RESTORE
GO
EXEC sys.sp_cdc_add_job 'capture'
GO
EXEC sys.sp_cdc_add_job 'cleanup'
GO
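
The CDC jobs and their configuration are stored in msdb, so you can quickly verify that they were (re)created:

-- CDC job configuration for all CDC-enabled databases on this instance
SELECT database_id, job_type, pollinginterval, retention
FROM msdb.dbo.cdc_jobs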

 
And if you moved your database, you can also clean up the original instance by running the statements below to remove the leftover jobs:

USE CDC_DB_RESTORE
GO
EXEC sys.sp_cdc_drop_job @job_type = N'capture'
GO
EXEC sys.sp_cdc_drop_job @job_type = N'cleanup'
GO

 
It is possible, but…
Moving or restoring a database with CDC enabled gives you a few challenges, and even though there are some issues, it's not impossible. But before you start, you need to think about the approach to take in order to move or restore the database without losing CDC. Depending on your environment and possibilities, you might want to use a specific option, or perhaps you can't use the option you would like. There is a way around it, but I would recommend testing it first, before you run this on production databases!

T-SQL Tuesday #56 – Assumptions

T-SQL Tuesday is a recurring blog party that was started by Adam Machanic (Blog | @AdamMachanic). Each month a blog will host the party, and everyone that wants to can write a post about a specific subject.

This month the subject is "Assumptions". If you want to read the opening post, please click the image below to go to the party-starter: Dev Nambi (Blog | @DevNambi).



 
This month's topic is assumptions. A few years back, I worked in a team that consisted mainly of .NET developers. Every time we mentioned "I think so…", "I assume it works like this…" or "I think we should…", one of them used the quote "Assumption is the mother of all f*ck ups!", which comes from the movie Under Siege 2: Dark Territory. But he was right. The moment you assume something, it's going to blow up in your face in the end.

 
I tested it, and it works
Working for larger organizations should mean they are better prepared for certain things than smaller organizations. But I've seen large organizations being badly prepared, or just plain unprepared. They assume their processes work, or that they will never encounter failure at all.

One of the companies I worked for took backups every night. Full backups. Of databases that were between 100 GB and 500+ GB… And they never tested a restore… Why? They used the default maintenance functionality with "Verify Backup Integrity" enabled, and they had never needed to restore a database before. So they only took backups because management wanted that. They didn't understand why, because their processes never failed, and would never fail in the future.

But one day, it wasn’t their processes that failed. A LUN went offline during the ETL process, and SQL Server naturally detected that. SQL Server put a database into suspect mode, because of database corruption. But because there were no backups, they needed to move to plan B: process about 2+ years of data (stored in XML files) again.

Eventually I solved it and recovered the database without the need for a restore, but it scared them. They now saw why they needed backups, and why they needed to test restores on a regular basis. But they forgot about it after a few days, and we never got the time to change the maintenance processes or test any restore. After that, I made the best choice possible in my opinion: I found myself a new challenge.

 
If you don’t know what you’re talking about…
Another example of assumptions I have seen a lot over the years is people explaining things to other people without any proper knowledge of the subject. I can recall a conversation between me and an intern. He was a .NET developer and had some questions about how a T-SQL feature worked. A junior BI developer started laughing when the intern asked his question. "What a stupid question, everybody knows the answer to that!" he said. Kind of irritated by that, I asked him to provide the answer to the question. He didn't want to. I asked him again: "You answer the question, because you laughed about it, and I want to hear the answer from you."

He started to stutter, and he explained the functionality all wrong. When I explained it the right way, the .NET intern thanked me and walked away with his new knowledge, ready to put it into practice. The BI developer wanted to continue the discussion. "You're all wrong! That feature doesn't work that way!" I calmly told him that I used this feature on a daily basis, and that he was wrong. The discussion went on a little longer, but I ended it by telling him: "I've been doing this for a number of years now, and working with SQL Server is what I do. You just started, and wrote your first query a few months ago. If you find any resources that show me being wrong, I'll be happy to quit my job. Until then, please don't explain T-SQL to other people if you don't know what you're talking about." To this day, he has never gotten back to me on this discussion.

 
I don’t need to check that!
Another great assumption you'll see in several companies is IT people that trust their own automation a little bit too much. The rule in IT is that if you need to do something more than once or a couple of times, you should automate it. Automation is a good thing, and it can save you time. But who checks that your process doesn't fail? You don't want to have to build a system that checks another system for you. One of the things I've seen is a developer who created an automated process that checked a log table on a database server and mailed new errors to the developer. Looking at this, it's a perfect solution: you don't have to monitor the log table yourself, because an automated process does that for you.

At one point, an application seemed to fail. It threw exceptions, and the end users weren't able to do anything with the application. The developer was called, and he told the users to contact the system administrator, because it must have been a server or hardware problem. The system administrator called the developer after a few minutes and told him the server and hardware were in perfect condition. The developer insisted his software wasn't failing, because he hadn't received any errors by email. But after a quick check, the developer came to the conclusion that his automated process itself had failed. The developer lost a lot of credibility because of this attitude. As you can see, this is another example of an assumption that went wrong.

 
Never stop asking questions
One of the most important things I wanted to show you with this blog post, is that if you don’t know the answer to a question, don’t be afraid to ask someone. The same goes for processes, tools, functionality, or any other question you want to ask. If people mock you for asking questions, they are the ones that are wrong. You’re just trying to learn and grow, so don’t feel bad about yourself!