Avoid a big SSISDB by logging less…?

Last week I blogged about how logging can grind your SSIS to a halt. After posting the blog, I got a really interesting reaction from Koen Verbeeck (Blog | @Ko_Ver):

I knew there were different logging levels in SSIS, but I couldn’t recall the difference between the levels. So I discussed the differences with Koen. At first glance we thought that the biggest difference is the fact that performance logging only logs warnings and errors. So basically, you decrease the amount of messages logged, which should result in slightly better performance. But is that really the case?

 
Different logging levels
In SSIS there are 4 different logging levels. So what are the options, and what’s the difference between them? You can read all about it on MSDN, but the short version is:

None: Logging is turned off
Basic (default value): All events are logged, except custom and diagnostic events
Performance: Only performance statistics, OnError and OnWarning events are logged
Verbose: All events are logged

 
The next question is: where do you change these options? For example, how do I change my logging level to Performance?

 
SSISDB / Catalog
When you want to change the logging level for all your deployed SSIS packages, you can change the setting on your catalog. You do that by opening the “Integration Services Catalogs” node on your instance, right-clicking on your catalog, and choosing an option for “Server-wide Default Logging Level”:

 
If you change this, the logging level for all packages in this catalog will change. But maybe you don’t want that, and you only want it for a specific package.
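If you prefer T-SQL over the GUI, you can check or change this server-wide default with the catalog.configure_catalog stored procedure as well. This is just a minimal sketch (the value 2 corresponds to the Performance level):

-- Check the current server-wide default logging level
-- (0 = None, 1 = Basic, 2 = Performance, 3 = Verbose)
SELECT property_name, property_value
FROM SSISDB.catalog.catalog_properties
WHERE property_name = N'SERVER_LOGGING_LEVEL';

-- Change the server-wide default logging level to Performance
EXEC SSISDB.catalog.configure_catalog
  @property_name = N'SERVER_LOGGING_LEVEL',
  @property_value = 2;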

 
SQL Server Agent Job
Another option is to configure this in your SQL Server Agent job:

 
If you use this option, the logging level will change for the package that is executed in the job step. So no permanent change, but only for the duration of the job.

 
At package execution
Another option is to do it at package runtime:

 
Personally I don’t execute packages this way, but you might. The same applies as with the SQL Server Agent job: this only changes the logging level for the duration of the execution, so it’s not a permanent change.
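Behind the scenes, this execution dialog just generates T-SQL against the SSISDB. If you’d rather script it yourself, it would look something like the snippet below (the folder, project, and package names are made up for this example; 2 is the Performance logging level):

DECLARE @execution_id BIGINT;

-- Create the execution (folder, project, and package names are examples)
EXEC SSISDB.catalog.create_execution
  @folder_name = N'MyFolder',
  @project_name = N'MyProject',
  @package_name = N'MyPackage.dtsx',
  @execution_id = @execution_id OUTPUT;

-- Override the logging level for this execution only (2 = Performance)
DECLARE @logging_level SMALLINT = 2;
EXEC SSISDB.catalog.set_execution_parameter_value
  @execution_id,
  @object_type = 50,
  @parameter_name = N'LOGGING_LEVEL',
  @parameter_value = @logging_level;

EXEC SSISDB.catalog.start_execution @execution_id;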

 
How to determine your best choice
Making a good decision is difficult in this case. When something goes wrong, you want to log everything. But when things run smoothly, it’s a waste of time to log every event that passes by in SSIS. And how do you determine the level of logging you need, without knowing what data is in your SSISDB? If you want to analyze that, you could use a query like this:

SELECT
  EventCounts.EventName,
  EventCounts.NumberOfEvents,
  CONVERT(FLOAT,(CONVERT(FLOAT,[NumberOfEvents]) / SUM(NumberOfEvents) OVER ()) * 100) AS 'Percentage'
FROM
  (
    SELECT
      EM.event_name AS 'EventName',
      COUNT(*) AS 'NumberOfEvents'
    FROM SSISDB.catalog.event_messages AS EM
    WHERE EM.event_name IS NOT NULL
    GROUP BY EM.event_name
  ) EventCounts
ORDER BY EventCounts.EventName ASC

 
This shows you the type of events stored in your SSISDB, the number of events, and a percentage over the whole dataset. This can help you determine the logging level you need in your specific case.

 
But here’s the catch…
Performance logging doesn’t actually make your packages run faster… Koen sent me this blog post from Matt Masson (Blog | @mattmasson). In his blog post, he explains what events are logged at the specific levels. And this is what he says about performance level:

The Performance log level should be used when you are doing benchmarking and performance tuning for your packages. While it actually logs less messages to the [catalog].[operation_messages] view than Basic, it captures a lot more events internally to analyze the performance of the data flow components. As a result, there is a bit more overhead during execution – packages run with Basic will actually run a little faster than Performance (in this case Performance means “give me all of the performance details”, not “run my packages as fast as you can”).

 
Conclusion
Even though SSIS is easy to use, there are some pros and cons. The longer I work with SSIS (and that’s not on a daily basis), the more of these pitfalls I discover. And even though they can usually be fixed pretty fast, it still takes time to figure out the problem, because I’m not that familiar with the inner workings of SSIS. And even when you think you’ve found a solution for your problem, SSIS just works slightly differently than expected. I’m just glad I have friends like Koen to help me out when needed! Thanks again Koen!

SSISDB: Why bigger isn’t always better

Two weeks ago I encountered some strange issues with the SSISDB. Packages and processes started failing, and all we could find was this error message:

 
The image above is an extract of the standard SSMS “All Executions” report. You can find this report by right-clicking your SSISDB, clicking Reports, then Standard Reports, and choosing the “All Executions” report.

Even though the packages kept failing, we couldn’t find a cause. Deploying new packages wasn’t possible either. There were no events logged in the report, in the Windows event viewer, or in the SQL Server error log. So there was no starting point to investigate this issue further.

 
Finding the cause
Checking the size of the SSISDB revealed a possible cause of the issue: it was 72 GB! This seemed odd, because we don’t store a lot of packages in the SSISDB. I knew SSISDB contained some logging information, but I didn’t know how much. Checking the “Disk usage per table” report (another standard report in SSMS), the problem became a bit more obvious:

 

 
So basically there were over 67 million records in the database! This shouldn’t have happened, because the log retention should be set to 14 days. But we quickly found the problem:

 

 
The retention was set to 365 days. This could only mean that the SSISDB was re-created, and this setting wasn’t changed after the deploy. This resulted in a lot of extra logging data in the database. Because of the number of records, the standard “SSISDB Maintenance job” ran for 40+ minutes, instead of the maximum of 8 minutes it normally needs to clean up the logging. But setting the retention to 14 days and running the maintenance job would just result in an endlessly running job. So how can you prevent that?
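Before we get to that: checking and changing the retention setting doesn’t require the GUI. With T-SQL it looks something like this (a minimal sketch):

-- Check the current retention and clean-up settings
SELECT property_name, property_value
FROM SSISDB.catalog.catalog_properties
WHERE property_name IN (N'RETENTION_WINDOW', N'OPERATION_CLEANUP_ENABLED');

-- Set the retention window to 14 days
EXEC SSISDB.catalog.configure_catalog
  @property_name = N'RETENTION_WINDOW',
  @property_value = 14;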

 
Grabbing the bigger hammer
In most cases, grabbing a bigger hammer to solve the problem isn’t your best option. In this case, I didn’t see another way. After some searching, I found this blog post about truncating the log tables in SSISDB. Below you’ll find a copy of the script. The credits for this script go to Ibrahim Naji (Blog | @thinknook).


CREATE PROCEDURE [internal].[cleanup_server_retention_window_truncateall]
AS  
     
SET NOCOUNT ON
     
DECLARE @enable_clean_operation bit
DECLARE @retention_window_length INT
     
DECLARE @caller_name nvarchar(256)
DECLARE @caller_sid  varbinary(85)
DECLARE @operation_id BIGINT
     
EXECUTE AS CALLER
    SET @caller_name =  SUSER_NAME()
    SET @caller_sid =   SUSER_SID()
REVERT
          
     
BEGIN TRY
    SELECT @enable_clean_operation = CONVERT(bit, property_value) 
        FROM [catalog].[catalog_properties]
        WHERE property_name = 'OPERATION_CLEANUP_ENABLED'
         
    IF @enable_clean_operation = 1
    BEGIN
        SELECT @retention_window_length = CONVERT(INT,property_value)  
            FROM [catalog].[catalog_properties]
            WHERE property_name = 'RETENTION_WINDOW'
                 
        IF @retention_window_length <= 0 
        BEGIN
            RAISERROR(27163    ,16,1,'RETENTION_WINDOW')
        END
             
        INSERT INTO [internal].[operations] (
            [operation_type],  
            [created_time], 
            [object_type],
            [object_id],
            [object_name],
            [STATUS], 
            [start_time],
            [caller_sid], 
            [caller_name]
            )
        VALUES (
            2,
            SYSDATETIMEOFFSET(),
            NULL,                     
            NULL,                     
            NULL,                     
            1,      
            SYSDATETIMEOFFSET(),
            @caller_sid,            
            @caller_name            
            ) 
        SET @operation_id = SCOPE_IDENTITY() 


        -- Remove all [internal].[executions] dependencies
        TRUNCATE TABLE [internal].[executable_statistics]
        TRUNCATE TABLE [internal].[execution_component_phases]
        TRUNCATE TABLE [internal].[execution_data_statistics]
        TRUNCATE TABLE [internal].[execution_data_taps]
        TRUNCATE TABLE [internal].[execution_parameter_values]
        TRUNCATE TABLE [internal].[execution_property_override_values]


        -- Remove all [internal].[event_message_context] dependencies
        TRUNCATE TABLE [internal].[event_message_context]

        -- Remove all non-dependent tables
        TRUNCATE TABLE [internal].[operation_os_sys_info]
        TRUNCATE TABLE [internal].[operation_permissions]
        TRUNCATE TABLE [internal].[validations]
        TRUNCATE TABLE [internal].[extended_operation_info]

        -- Deal with [internal].[event_messages] and [internal].[operation_messages]
        ALTER TABLE [internal].[event_message_context] DROP CONSTRAINT [FK_EventMessageContext_EventMessageId_EventMessages]
         
        TRUNCATE TABLE internal.event_messages
         
        ALTER TABLE [internal].[event_message_context]  WITH CHECK ADD  CONSTRAINT [FK_EventMessageContext_EventMessageId_EventMessages] FOREIGN KEY([event_message_id])
        REFERENCES [internal].[event_messages] ([event_message_id])
        ON DELETE CASCADE

        ALTER TABLE [internal].[event_messages] DROP CONSTRAINT [FK_EventMessages_OperationMessageId_OperationMessage]
         
        TRUNCATE TABLE [internal].[operation_messages]

        ALTER TABLE [internal].[event_messages]  WITH CHECK ADD  CONSTRAINT [FK_EventMessages_OperationMessageId_OperationMessage] FOREIGN KEY([event_message_id])
        REFERENCES [internal].[operation_messages] ([operation_message_id])
        ON DELETE CASCADE

        -- Deal with [internal].[executions]

        ALTER TABLE [internal].[executable_statistics] DROP CONSTRAINT [FK_ExecutableStatistics_ExecutionId_Executions]
        ALTER TABLE [internal].[execution_component_phases] DROP CONSTRAINT [FK_ExecCompPhases_ExecutionId_Executions]
        ALTER TABLE [internal].[execution_data_statistics] DROP CONSTRAINT [FK_ExecDataStat_ExecutionId_Executions]
        ALTER TABLE [internal].[execution_data_taps] DROP CONSTRAINT [FK_ExecDataTaps_ExecutionId_Executions]
        ALTER TABLE [internal].[execution_parameter_values] DROP CONSTRAINT [FK_ExecutionParameterValue_ExecutionId_Executions]
        ALTER TABLE [internal].[execution_property_override_values] DROP CONSTRAINT [FK_ExecutionPropertyOverrideValue_ExecutionId_Executions]

        TRUNCATE TABLE [internal].[executions]

        ALTER TABLE [internal].[execution_property_override_values]  WITH CHECK ADD  CONSTRAINT [FK_ExecutionPropertyOverrideValue_ExecutionId_Executions] FOREIGN KEY([execution_id])
        REFERENCES [internal].[executions] ([execution_id])
        ON DELETE CASCADE

        ALTER TABLE [internal].[execution_parameter_values]  WITH CHECK ADD  CONSTRAINT [FK_ExecutionParameterValue_ExecutionId_Executions] FOREIGN KEY([execution_id])
        REFERENCES [internal].[executions] ([execution_id])
        ON DELETE CASCADE

        ALTER TABLE [internal].[execution_data_taps]  WITH CHECK ADD  CONSTRAINT [FK_ExecDataTaps_ExecutionId_Executions] FOREIGN KEY([execution_id])
        REFERENCES [internal].[executions] ([execution_id])
        ON DELETE CASCADE

        ALTER TABLE [internal].[execution_data_statistics]  WITH CHECK ADD  CONSTRAINT [FK_ExecDataStat_ExecutionId_Executions] FOREIGN KEY([execution_id])
        REFERENCES [internal].[executions] ([execution_id])
        ON DELETE CASCADE
         
        ALTER TABLE [internal].[execution_component_phases]  WITH CHECK ADD  CONSTRAINT [FK_ExecCompPhases_ExecutionId_Executions] FOREIGN KEY([execution_id])
        REFERENCES [internal].[executions] ([execution_id])
        ON DELETE CASCADE
         
        ALTER TABLE [internal].[executable_statistics]  WITH CHECK ADD  CONSTRAINT [FK_ExecutableStatistics_ExecutionId_Executions] FOREIGN KEY([execution_id])
        REFERENCES [internal].[executions] ([execution_id])
        ON DELETE CASCADE
         

        -- Deal with [internal].[operations]
        DECLARE @deleted_ops TABLE(operation_id BIGINT, operation_type SMALLINT)

        DELETE --TOP (@delete_batch_size)
        FROM [internal].[operations] 
        OUTPUT DELETED.operation_id, DELETED.operation_type INTO @deleted_ops
        WHERE operation_id != @operation_id

             
             
        DECLARE @execution_id BIGINT
        DECLARE @sqlString              nvarchar(1024)
        DECLARE @key_name               [internal].[adt_name]
        DECLARE @certificate_name       [internal].[adt_name]
             
             
        DECLARE execution_cursor CURSOR LOCAL FOR 
            SELECT operation_id FROM @deleted_ops 
            WHERE operation_type = 200
             
        OPEN execution_cursor
        FETCH NEXT FROM execution_cursor INTO @execution_id
             
        WHILE @@FETCH_STATUS = 0
        BEGIN
            SET @key_name = 'MS_Enckey_Exec_'+CONVERT(VARCHAR,@execution_id)
            SET @certificate_name = 'MS_Cert_Exec_'+CONVERT(VARCHAR,@execution_id)
            SET @sqlString = 'IF EXISTS (SELECT name FROM sys.symmetric_keys WHERE name = ''' + @key_name +''') '
                +'DROP SYMMETRIC KEY '+ @key_name
                EXECUTE sp_executesql @sqlString
            SET @sqlString = 'IF EXISTS (select name from sys.certificates WHERE name = ''' + @certificate_name +''') '
                +'DROP CERTIFICATE '+ @certificate_name
                EXECUTE sp_executesql @sqlString
            FETCH NEXT FROM execution_cursor INTO @execution_id
        END
        CLOSE execution_cursor
        DEALLOCATE execution_cursor

        END
END TRY
BEGIN CATCH
         
         
    IF (CURSOR_STATUS('local', 'execution_cursor') = 1 
        OR CURSOR_STATUS('local', 'execution_cursor') = 0)
    BEGIN
        CLOSE execution_cursor
        DEALLOCATE execution_cursor            
    END
         
    UPDATE [internal].[operations]
        SET [STATUS] = 4,
        [end_time] = SYSDATETIMEOFFSET()
        WHERE [operation_id] = @operation_id;       
    THROW
END CATCH
     
RETURN 0

 
After running this on the test environment, I found out it worked as I expected. I started the script on production (where the problem was found in the first place), and when it finished I changed the retention from 365 to 14 days:

 

 
After doing that, I ran the “SSISDB Maintenance job” (which completed without any issues), and executed an SSIS package as a test (which also worked fine). And looking at the record counts, the cleanup clearly did its job:

 

 
I’m not saying you should run this on your system, but I’m just saying this works fine on my system, and in my specific case. If you find any code on the internet, test it on a non-critical system first, before you run it in production!!!

 
Conclusion
Having error logging is a good thing. It can help you determine problems, and you can see if and how processes grow over time. But too much SSIS logging causes performance issues, prevents you from deploying new packages, and can even cause unexpected termination of SSIS packages. One way to prevent that is by changing the history retention of your SSISDB.

SSIS: Zipping files with WinRAR

One of the strengths of SSIS (SQL Server Integration Services) is that it’s really powerful with the components available. But it’s also easy to use the “Execute Process Task” to execute external tools. One of these tools can be WinRAR.

 
Zip file task
By default, SSIS doesn’t have a zip file task. There are some 3rd party components, like the one Joost van Rossum (Blog | @ssisjoost) built. You can find his custom task here. But what if you don’t want to use 3rd party components? You can create your own zip task by using an “Execute Process Task”.

 
Variables and Parameters
Because I want to keep this as easy as possible, and I might want to reuse this later in another package, I started by creating 2 parameters and 1 local variable:

 
Variable
Function:
To generate a unique zip file name per day, I decided to add the date to the zip file name

Name:
ZipName

Value:
"ZIPPED_" +
(DT_WSTR, 50) YEAR((DT_DBTIMESTAMP) @[System::StartTime]) + "_" +
RIGHT("00" + (DT_WSTR, 50) MONTH((DT_DBTIMESTAMP) @[System::StartTime]), 2) + "_" +
RIGHT("00" + (DT_WSTR, 50) DAY((DT_DBTIMESTAMP) @[System::StartTime]), 2) +
".zip"

 
This expression will evaluate to:

 
Parameter 1
Function:
Point the “Execute Process Task” to the folder of the files that need to be zipped

Name:
FilePath

Value:
C:\Temp

 
Parameter 2
Function:
Tell WinRAR which files to zip in which directory

Name:
FilesToZip

Value:
C:\\Temp\\*.txt

 
When you’ve added the variable and parameters, you should have this variable:

 
And these parameters:

 
Once that is done, you can add an “Execute Process Task” to the Control Flow. When you open the component you just added, go to the Executable textbox, and point it to the WinRAR executable on your disk:

 
Now you need to add the arguments that tell WinRAR which files to zip and where to put the archive. In the “Execute Process Task”, click on Expressions, and on the dotted button:

 
In the next window, select “Arguments” under Property, and press the dotted button again:

 
Now paste the string below in this window:

 
"a -df " + @[$Package::FilePath] + @[User::ZipName] + " " + @[$Package::FilesToZip]

 
If you press the evaluate button, you’ll see this result:

 

 
The command line switches we use are:

a: Archive files (or x for Extract files)
-df: Delete files after archiving
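To give you an idea of what ends up being executed: assuming WinRAR is installed in its default location, the FilePath parameter ends with a backslash, and the package runs on July 1st 2014, the complete command line would look something like this:

"C:\Program Files\WinRAR\WinRAR.exe" a -df C:\Temp\ZIPPED_2014_07_01.zip C:\Temp\*.txt

This is also the line you can paste into a command prompt to see the actual WinRAR error when your package fails.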

 
So once you know how it’s done, this is pretty easy, and it does what it has to: zip your files without any issues. One thing to remember is that SSIS can’t surface the exceptions WinRAR throws. I found that out when I tested this earlier: I didn’t notice that the directory didn’t exist, my SSIS package kept failing, and I didn’t know why. But after trying the expression we put into the expression textbox of the “Execute Process Task” in a command prompt, I did see the actual error.

So even though it’s easy to use, it’s not always easy to debug when it fails. But as long as you can execute it via command line yourself, you’re just a few steps away from seeing the actual error.

 
If you want to read more about SSIS, don’t forget to check out these blog posts:

Julie Koesmarno: On sabbatical
Mickey Stuewe: SQL Server data transferred to a sqlite database using ssis
Chris Yates: ISPACs – Got To Love Em

T-SQL Tuesday #51 – Place Your Bets

T-SQL Tuesday is a recurring blog party that was started by Adam Machanic (Blog | @AdamMachanic). Each month a blog will host the party, and everyone who wants to can write a blog post about a specific subject.

This month the subject is “Place Your Bets”. If you want to read the opening post, please click the image below to go to the party-starter: Jason Brimhall (Blog | @sqlrnnr).



 
When I read about this month’s T-SQL Tuesday topic, the first thing that came to mind was things that you know will go wrong sooner or later. When you encounter a situation like this, you immediately know it can’t last forever. You want to fix it when you see it, but there’s no money, or there’s no time at that moment. But they promise you that in a few weeks you can take all the time you need. Well, that’ll never happen. Until things go wrong, and you get to clean up the mess. Sounds familiar? Yes, we’ve all seen this, or will see it sooner or later.

 
With power comes great responsibility
Just imagine this with me. One of your colleagues asks you to look at a problem he’s having with a script someone in your company wrote. You probably solve it while he’s standing right next to you. He watches you solve the problem, and when it’s solved, he walks away with a thousand-yard stare in his eyes. You don’t really think about it when it happens, but it’ll come back to you…

A few weeks later, it’s 10 AM and you’re still having your first coffee of the day, when the same developer asks you to look at “his script”. Wait, what?! Yes, he watched you work your magic, and that funny language of “Es-Que-El” seemed easy to learn. So he bought himself “SQL Server for Dummies”, learned all he needed to know in only a weekend, and wonders why it took you so long to learn it. From now on, he can write his own scripts, so he doesn’t need you anymore. Except for this last time.

Opening the script scares you: it’s a cursor. But in your frustration and amazement you “fix” the broken script, by refactoring his select statement in the cursor. Because the cursor only collects data, you add a “TOP 10” clause to the select statement, and run the script as a test. Nice, it finishes in 25 seconds. “It will only consume 500 rows” is the last thing you hear him say. You send the guy off, so you can continue your own work.

Later in the day, around 4 PM, you meet the same guy at the coffee machine. He starts a discussion about how he needs a new PC, because the script YOU wrote is slow (see where this is going…?). It has been running for about 4 hours now, while it should only collect about 500 records. I know what you think: that’s impossible. You walk with him to his desk, stop the script, and look at his code. That isn’t the query you looked at this morning. Asking your colleague about it explains it all: he “slightly refactored” the script, because he didn’t need all those weird statements to get his results. Well, after a fiery discussion of a few minutes, you explain to him that he DOES need the “FETCH NEXT” in the query, because without it the cursor kept running the same statement for only the first record of the select statement you declared for the cursor.

So this funny “Es-Que-El” language isn’t that easy to learn. There’s a beautiful quote about that, and I’m not sure who said it: “T-SQL is easy to learn, but hard to master”. So putting your money on one horse, in this case buying yourself a book, isn’t a good idea.

 
Putting your money on one color
Another great example is a company that had a wonderful Business Intelligence environment. They used the whole nine yards: SQL Server, SSIS, SSAS, SSRS, etc. The downside, you ask? It was all hosted on 1 physical machine, on a single SQL Server instance. Oh, and it was running low on disk space, and there was no room in the chassis to put in extra disks. That’s right: it was like juggling burning chainsaws with only one hand. Or an interesting challenge, if you will.

Eventually we hosted a few databases on NAS volumes. At that point, I was told the databases we moved were less important. Pro tip: never EVER trust them when they say that!!! They forgot to tell me that the biggest of the moved databases wasn’t in the backup plan (a 500 GB database takes a long time to back up), and the last backup was made over a year ago. Surprise: one night the network card failed for maybe only a microsecond, and SQL Server thought the LUN was offline or the disk had crashed. So SQL Server marked the database as corrupt, because the data files were unavailable. After a few hours, a reboot of the server fixed it, and SQL Server could see the disk volumes again. So the database was saved after all.

But you see where I’m going with this? You never know when things go wrong, and putting all your money on one color when playing roulette isn’t the best idea. If the hardware of your single server fails, you fail.

 
Next, Next, Finish?
But the biggest example I can give you of a badly placed bet are companies that work with SQL Server, but don’t hire a DBA. Have you ever worked for a company that works with Oracle? Every single company that works with Oracle has a dedicated Oracle DBA. But have you ever wondered why that isn’t the case when a company works with SQL Server?

Thinking about it, I guess this is because a successful SQL Server installation is only a few “Next, Next, Finish” mouse clicks away. So if the installation is that easy, every developer or person with IT experience can probably administer it. They couldn’t be more wrong. You know that, I know that, every SQL Server professional knows that, but try to convince other people of that fact.

So the worst bet you can place, and this is how I write myself back to the subject of this month, is not hiring a professional to manage your data and data stores. You wouldn’t let your local baker fix your car just because he wrote some books about cars, right? So why do you let a developer with basic knowledge near your SQL Server? Just because real DBAs cost money? Yes, we do cost some serious money. But in the end, at least when you hire a GOOD DBA, they will make you money. You don’t think so? What does a DBA cost per hour? And how much money do you lose when your servers are down for just an hour?

SSIS – Remove empty rows from Excel import

From the first time I started using SSIS, I loved it. In most cases it’s easy to create a package, easy to understand, and even readable for people who don’t “speak fluent SQL”. But what if you want to perform an easy task, and the result isn’t what you expect?

 
Formatting an Excel sheet
One of the most basic tasks you can perform in SSIS is importing an Excel sheet. Most of the time this works like a charm. But in my case, I wanted to filter out some rows from the workbook.

The business delivers an Excel sheet that needs to be imported into the database. But because they don’t have the technical knowledge we have, they don’t know how important the format of the file is. They sent us this file (I’ve created a smaller sample, so it’s easier to read and understand):

 
The first thing you’ll notice as a data professional is the 2 empty rows in the sheet. Besides that, we have an employee without a name. You know this is going to cause problems when you see it. These errors are easy to spot in the example, but imagine if these 2 rows were hidden in a dataset of 50,000 or more rows. So even though they might have ended up there accidentally, your process is going to fail.

 
When you add an “Excel Source” to your package, and you look at the preview of that import, you immediately see the problem:

 
Table structure
In order to determine what columns can be left blank, and what columns can’t be NULL, I looked at the table structure:

CREATE TABLE ResultSSIS
  (ID INT IDENTITY(1, 1),
   FullName VARCHAR(50) NOT NULL,
   Department VARCHAR(50) NULL,
   EmployeeNumber INT NOT NULL)

 
So in the dataset, FullName and EmployeeNumber are mandatory, and Department is optional. With this in mind, I started to work on a way to exclude those rows.

 
Import without excluding
The first thing I tried was to import the file, and see what the results were. Because I knew the data wasn’t correct, I didn’t want to import the Excel sheet into a SQL Server database just yet. So as a destination, I used the “Recordset Destination” in SSIS. Importing the data into this in-memory table also allowed me to use the “data viewer” to see the imported data, without the need to truncate a table after each run. You can enable the “data viewer” by right-clicking the import connector (the arrow between components), and clicking “Enable Data Viewer”:

 
If you run the SSIS package in debugging mode, you’ll see the data that is imported in a pop-up window:

 
As you can see in the screenshot above, the records with NULL values in them are included in this import. So which records do we want to exclude, based on our table structure?

 
So from the 6 records in the Excel sheet, we want to exclude 3 from our import because of NULL values. But how do we do that? The easiest way to solve it is to import the data into a temp table, delete the NULL records, and insert the remaining records into the destination table. But what if that isn’t possible, and you want to filter the records in your import? I’ve chosen to use the “Conditional Split”.

 
Conditional Split
You don’t have to rebuild your whole package when you want to exclude records with the “Conditional Split”. You can just add this component, at least in this case, in between your source file and your destination. If you open the component, you can add an expression that is used to filter records. In my case, I wanted to exclude the rows with an empty “FullName” or “EmployeeNumber”:
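The expression I used looked something like the one below (the column names come from the Excel source, and empty Excel cells arrive as NULL values, which is why ISNULL() does the trick here). Rows that match this condition are the ones we want to get rid of:

ISNULL(FullName) || ISNULL(EmployeeNumber)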

 
When connecting your “Conditional Split” to your destination, SSIS will ask you which output the “Conditional Split” needs to return. To output the entire set without the empty rows, choose the “Conditional Split Default Output”:

 
When you run your package with the extra “Conditional Split” (and you enable the Data Viewer again), you’ll see the filtered output of the “Conditional Split”. The 3 NULL records are excluded as expected:

 
Conclusion
SSIS is easy to use, and yet a really powerful tool. Even if you’ve built your processes in SSIS, it’s not always necessary to rebuild your whole package. Sometimes you can save the day with just a minor change. That’s the power of SSIS!

Backup and relax?

Keeping a good backup strategy is a must for every DBA and database developer. But just creating a backup isn’t enough. Perhaps you don’t have enough storage to keep the full backups of your database for over a week. Or taking a full backup of a database takes so long that it’s only possible on weekends. So what are your options?

 
RPO and RTO
Your whole backup strategy starts by determining the RPO (Recovery Point Objective) and RTO (Recovery Time Objective). The great Brent Ozar (Blog | @BrentO) wrote a great blog post about these terms, and explains what they mean.

Basically it means that you need to determine what maximum data loss is allowed, and from there you start creating a backup strategy. So how do you determine these? This is what RPO and RTO look like if you visualize them:

 
Storage
Another thing you want to consider is the storage available for your backups. Most of the time the backups will be stored on a NAS (Network-attached storage), and not on the local server, so storage isn’t a big issue in that case.

I’ve also seen companies that created the backup on the local server, and after completion copied it to a network location. In my opinion it’s only one more dependency that you could prevent, but other than that it’s a valid option.

 
Backup types
SQL Server supports multiple backup options. They all have their pros and cons, and give you the ability to create a backup strategy that fits your needs. In this blog post, I’m assuming the database we work with uses the full recovery model.

 
Full backup
A full backup takes a backup of the entire database. With this backup file you’ll be able to recover the entire database, without the need for extra log files. Creating a full backup can take more time, depending on the size of the database. Let’s visualize this with an example:

 
Looking at the example, you’ll see that a full backup is created every night. But on Friday night at 8 PM the database crashes, and we need to recover from backup. The last full backup was taken 20 hours ago, so those 20 hours of data changes are lost.
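Creating a full backup is as simple as this (the database name and backup path are just examples):

-- Full backup of the entire database
BACKUP DATABASE [SalesDB]
TO DISK = N'\\BackupServer\Backups\SalesDB_Full.bak'
WITH INIT, STATS = 10;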

 
Differential backup
If you have less time to spend on backing up your database every night, one of your options is to take a differential backup. A differential backup only backs up data that has changed since the last full backup. A differential backup can’t be created without taking a full backup first. If you try to create one without a full copy of the database, SQL Server will throw an error:

 
When you create a full backup, a backup chain is started. This means that SQL Server registers which LSN (Log Sequence Number) was added to the last backup. When you take the next backup, the backup will contain all transactions from the last LSN of the previous backup until the time of your new backup.

A backup chain can’t be started with a differential backup. Also, when you want to restore a differential backup, you need the full backup it’s based on. To put this into perspective, look at the visualization below:

 
At midnight on Monday we create a full backup of the database. On each of the following days we create a differential backup at midnight. On Friday at 8 PM the database crashes, and we need to restore a backup. All we need is the full backup from Monday morning, and differential backup 4. Although it takes less time to create a differential backup, you see that this hasn’t helped you much. You still lost 20 hours of data.
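The T-SQL for a differential backup only differs from the full backup in the WITH clause (again, the names and paths are examples):

-- Differential backup: only the data changed since the last full backup
BACKUP DATABASE [SalesDB]
TO DISK = N'\\BackupServer\Backups\SalesDB_Diff.bak'
WITH DIFFERENTIAL, INIT, STATS = 10;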

 
Transaction Log backup
The last major type of backup is the transaction log backup. This backup contains all transactions that were executed after the last full or differential backup was created. This gives you the opportunity to perform a so-called “point-in-time recovery”.

Just like a differential backup can’t be created without a full backup, a transaction log backup can’t be created without a full or differential backup first. So a transaction log backup can’t be used to start a backup chain. Let’s take the same example, and add a transaction log backup every 15 minutes (the blue lines represent the transaction log backups):

 
If the database crashes at the same time as in the previous examples, your data loss is slimmed down from 20 hours to a maximum of 15 minutes. In order to recover your database, you need the full backup created on Monday, the differential backup created on Friday, and all transaction log backups created after the differential backup at midnight. So if the database crash occurs a few seconds before the next transaction log backup, the maximum data loss is 15 minutes. Again, without a full or differential backup you can’t create a transaction log backup:
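A transaction log backup, and the point-in-time restore it enables, would look something like this (the database name, paths, and STOPAT time are examples; the full and differential backups are restored WITH NORECOVERY, so the log backup can be applied on top):

-- Transaction log backup (run every 15 minutes in the example above)
BACKUP LOG [SalesDB]
TO DISK = N'\\BackupServer\Backups\SalesDB_Log.trn'
WITH INIT, STATS = 10;

-- Point-in-time restore: full, then differential, then the log backups
RESTORE DATABASE [SalesDB]
FROM DISK = N'\\BackupServer\Backups\SalesDB_Full.bak'
WITH NORECOVERY;

RESTORE DATABASE [SalesDB]
FROM DISK = N'\\BackupServer\Backups\SalesDB_Diff.bak'
WITH NORECOVERY;

RESTORE LOG [SalesDB]
FROM DISK = N'\\BackupServer\Backups\SalesDB_Log.trn'
WITH STOPAT = N'2014-07-04 19:59:00', RECOVERY;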

 
Pitfalls
Whenever you create a database, back it up, and then throw away that backup file, you can still create a differential or transaction log backup. SQL Server doesn’t require the last full backup (or differential backup, in case of a transaction log backup) to still be present on disk. So remember to always check if there is a valid base backup available, either on the server or on your backup location.

 
Backup compression
From SQL Server 2008 onward, you can use a feature called backup compression. Whether or not you compress your backup can also make a big difference in performance. A compressed backup is smaller, so it requires less I/O when created, and can increase backup speed significantly. On the other hand, compressing a backup increases CPU usage. So it’s a tradeoff you need to consider. But in some cases, this could solve the problem of having a shortage of storage.
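Enabling compression is just one extra option in the WITH clause:

-- Compressed full backup: smaller file and less I/O, at the cost of extra CPU
BACKUP DATABASE [SalesDB]
TO DISK = N'\\BackupServer\Backups\SalesDB_Full.bak'
WITH COMPRESSION, INIT, STATS = 10;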

 
Files and Filegroups backup
Creating a file or filegroup backup can be practical when performance or database size is an issue for you. Perhaps taking a backup of your 500 GB database takes too long, and you need to consider other options.

You can back up all filegroups separately, but it’s also possible to combine a number of filegroups in a single backup. This makes it easier to balance your backups over several disks when you’d like to. But perhaps it’s easier to create a backup that consists of multiple files. This can be achieved by adding more destination files at the bottom of the “create a backup” GUI. SQL Server then balances the data across the files you added.

Adding more destination files to your backup can also increase performance. Jes Schultz Borland (Blog | @grrl_geek) wrote a great article about that. She tested several options to see what the impact on performance is.
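As a sketch, this is what a filegroup backup striped over two destination files could look like (the filegroup names and paths are made up for this example):

-- Backup of two filegroups, striped over two destination files
BACKUP DATABASE [SalesDB]
FILEGROUP = N'PRIMARY',
FILEGROUP = N'ARCHIVE'
TO DISK = N'D:\Backups\SalesDB_FG_1.bak',
   DISK = N'E:\Backups\SalesDB_FG_2.bak'
WITH INIT, STATS = 10;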

 
Copy-only backup
One of the most important backup options (in my opinion) is the copy-only backup. This allows you to create an ad-hoc backup without breaking the backup chain.

A copy-only backup works independently from any previous backup or backup plan. It doesn’t take the last LSN into account, and it doesn’t register itself as the new base for the backup chain. So if you have a backup plan in place, and you or one of your colleagues needs to create an ad-hoc backup, copy-only is the way to go.
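A copy-only backup is, again, just one extra option:

-- Ad-hoc backup that doesn't affect the backup chain
BACKUP DATABASE [SalesDB]
TO DISK = N'\\BackupServer\Backups\SalesDB_CopyOnly.bak'
WITH COPY_ONLY, INIT, STATS = 10;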

 
Now can we relax?
The short answer is: NO! Your work has only just begun. Now that you have a backup strategy, you need to build it, test it, tune it, and cherish it. Creating the perfect backup strategy isn’t a silver bullet. Your databases change, your processes change, your colleagues change…

So when was the last time you tried to restore a backup from your automated process? You can’t remember? Time to get it done then! You know what they say: a DBA is only as good as his last restore. So if you want to keep working as a DBA for your company, start preparing a test restore now.
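A test restore doesn’t have to be complicated. A minimal sketch (the logical file names and paths are examples; you can look up yours with RESTORE FILELISTONLY):

-- Quick sanity check of the backup file
RESTORE VERIFYONLY
FROM DISK = N'\\BackupServer\Backups\SalesDB_Full.bak';

-- Restore under a different name, so the original database stays untouched
RESTORE DATABASE [SalesDB_RestoreTest]
FROM DISK = N'\\BackupServer\Backups\SalesDB_Full.bak'
WITH MOVE N'SalesDB' TO N'D:\Data\SalesDB_RestoreTest.mdf',
     MOVE N'SalesDB_log' TO N'E:\Log\SalesDB_RestoreTest.ldf',
     STATS = 10;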

 
And then…?
Once you’ve created a backup strategy, the hard work just begins. How are you implementing your backups? Are you planning on using the default SQL Server maintenance plans? Are you building something yourself with SQL Server Agent Jobs and SSIS packages? Maybe you want to buy a solution from a specific vendor you like or know? Well, what about a free solution?

If you’re looking for a cheap way out, building it yourself is a good option. But why not look at the completely free solution by the new MVP Ola Hallengren (Website | @olahallengren)?

It’s a solution used by many of our community members, and it has won a lot of prizes over the years. Not sure if it’s safe to use? Why don’t you look at the list of companies that use his solution, or read the blog post by Jonathan Kehayias (Blog | @SQLPoolBoy) about it.

Another great resource to start from is the TechNet page about backups. This contains a lot of information about the techniques behind the backup process, and the possible pitfalls you’ll encounter.

 
Conclusion
When creating a backup strategy, you need to take a lot of factors into account. What kind of hardware are you working with? Is the storage you need available? Is it possible to create a full backup every night, or only on weekends?

After building your (custom) solution, you need to spend time on tuning and maintaining it. Your databases aren’t static, and will change every second, every minute, every day. So keep changing your process to perform at its best, and in the end, you will create your own free time to spend on cool things.

Moving files with SSIS

In the last few weeks, I’ve seen the ugly side of being a DBA: server maintenance. Most of the time this is only needed when things go wrong, when checking the backups, etc. But last week I had something that makes every DBA on earth shiver: a disk space shortage.

One of the ETL processes that runs on our server downloads a bunch of XML files from an external source, and stores those on local disk. After that, the XML files are imported into a database, so our business analysts can use the content for their analysis. After the import, the XML files are zipped, and stored in a folder called “Backup”. But because the files are never deleted after the import (they might want to audit the data and files), the disk they were stored on was running low on disk space.

 
Thinking about the options
As a DBA, it’s your job to make your own life as easy as possible. So moving the files by hand is okay if you only need to do it once. But I’d rather solve this once and for all, because maybe next time the disk will run out of space in the middle of the night, instead of during the day like this time.

So to prevent this issue in the future, I’ve created an SSIS solution to run directly after the import. This SSIS package moves the files from local disk to SAN storage. This way, we’re getting rid of the local files, but the files are still available when needed.

 
Creating your test environment
The first thing I did was create a copy of the directory structure that exists on the server, on my local machine. I’ve just copied the structure, and put a single .txt file in every directory. You can download the “test environment” setup here. Unzip the contents of the folder to “C:\Temp”, so you can use the SSIS package as described below, without making changes to it.

 
Creating the package
I started by creating a new package, and added a “Foreach Loop Container” for every folder that contains XML and text files. In every “Foreach Loop Container” I’ve placed a “File System Task”. After that, I’ve created the variables needed for this package:

Some of the variables are created just for the scope of the corresponding “Foreach Loop Container”. So if you want to see all variables in one overview, click the “Show All Variables” button that I’ve marked in the image above.

 
Configuring the components
Now you can start configuring the components. The “Foreach Loop Container” needs to be configured like this:

And:

And the “File System Task” needs to be configured like this:

You can also download the SSIS package as shown above, directly here (save the XML as dtsx file).

 
Test run
Now you’re ready for a test run! If you didn’t download the directories and files (the “test environment”) above, make sure you create all the destination directories yourself. If you don’t, the package will fail. I didn’t build that check into this version, but you might want to add a feature like that to the package.
