Published CDC article

A few years ago, I made a list of goals I wanted to work towards, and things I wanted to achieve. One of those things was speaking (which I achieved in January 2015). Another point on the list was to write an article for a published magazine. Today, I’m proud to announce I can cross that goal off my list as well!

In December 2014, I saw a tweet from Marcel Meijer (Blog | @MarcelMeijer), asking who wanted to write an article for SDN Magazine. After thinking about it for a while, I decided to just go for it.

Earlier this week, I saw the magazine was available for download on the SDN website. I’m honored my article was published, and I can only hope the people who read the magazine like it.

If you read the article, I’d love to hear your opinion!

Avoid a big SSISDB by logging less…?

Last week I blogged about how logging can grind your SSIS packages to a halt. After posting it, I got a really interesting reaction from Koen Verbeeck (Blog | @Ko_Ver) about the different logging levels:

I knew there were different logging levels in SSIS, but I couldn’t recall the difference between them. So I discussed the differences with Koen. At first glance, we thought the biggest difference was that performance logging only logs warnings and errors. So basically, you decrease the number of messages logged, which should result in slightly better performance. But is that really the case?

 
Different logging levels
In SSIS there are 4 different logging levels. So what are the options, and what’s the difference between them? You can read all about it on MSDN, but the short version is:

None: Logging is turned off
Basic (default value): All events are logged, except custom and diagnostic events
Performance: Only performance statistics, OnError and OnWarning events are logged
Verbose: All events are logged
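If you want to know which default your catalog currently uses, you can query the catalog properties. A minimal sketch (the property holds the numeric value: 0 = None, 1 = Basic, 2 = Performance, 3 = Verbose):

SELECT property_name, property_value
FROM SSISDB.catalog.catalog_properties
WHERE property_name = N'SERVER_LOGGING_LEVEL'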

 
The next question is: where do you change these options? For example, how do I change my logging level to Performance?

 
SSISDB / Catalog
When you want to change the logging level for all your deployed SSIS packages, you can change the setting on your catalog. You can do that by opening “Integration Services Catalogs” on your instance, right-clicking your catalog, and choosing an option for “Server-wide Default Logging Level”:

[Screenshot: the “Server-wide Default Logging Level” option in the catalog properties]
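If you’d rather script this change than click through the dialog, the same property can presumably be set with the catalog’s configure procedure. A minimal sketch, using the numeric values from the list above:

EXEC SSISDB.catalog.configure_catalog
  @property_name = N'SERVER_LOGGING_LEVEL',
  @property_value = 2 -- Performance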
If you change this, the logging level for all packages in this catalog will change. But maybe you don’t want that, and you only want it for a specific package.

 
SQL Server Agent Job
Another option is to configure this in your SQL Server Agent job:

[Screenshot: setting the logging level in a SQL Server Agent job step]
If you use this option, the logging level changes only for the package executed in that job step. So no permanent change; it only applies for the duration of the job.
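Under the covers, such a job step is just a command line for the SSIS server, and the logging level travels along as a server option parameter. A hedged example of what the generated job step command might look like (server, folder, project and package names are placeholders):

/ISSERVER "\"\SSISDB\MyFolder\MyProject\MyPackage.dtsx\"" /SERVER "\"MYSERVER\"" /Par "\"$ServerOption::LOGGING_LEVEL(Int16)\"";2 /CALLERINFO SQLAGENT /REPORTING E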

 
At package execution
Another option is to do it at package runtime:

[Screenshot: the logging level option when executing a package]
Personally I don’t execute packages this way, but you might. The same applies as with the SQL Server Agent job option: this only changes the logging level for the duration of the execution, so there’s no permanent change.
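For completeness: you can also start a package from T-SQL and override the logging level for just that execution. A minimal sketch, assuming a deployed package (folder, project and package names are placeholders):

DECLARE @execution_id BIGINT
DECLARE @logging_level SMALLINT = 2 -- Performance

-- Create the execution for the deployed package
EXEC SSISDB.catalog.create_execution
  @folder_name = N'MyFolder',
  @project_name = N'MyProject',
  @package_name = N'MyPackage.dtsx',
  @execution_id = @execution_id OUTPUT

-- Set the logging level as a system parameter (object_type 50) for this execution only
EXEC SSISDB.catalog.set_execution_parameter_value
  @execution_id,
  @object_type = 50,
  @parameter_name = N'LOGGING_LEVEL',
  @parameter_value = @logging_level

EXEC SSISDB.catalog.start_execution @execution_id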

 
How to determine your best choice
Making a good decision is difficult in this case. When something goes wrong, you want to log everything. But when things run smoothly, it’s a waste of resources to log every event that passes by in SSIS. And how do you determine the level of logging you need, without knowing what data is in your SSISDB? If you want to analyze that, you could use a query like this:

SELECT
  EventCounts.EventName,
  EventCounts.NumberOfEvents,
  (CONVERT(FLOAT, NumberOfEvents) / SUM(NumberOfEvents) OVER ()) * 100 AS 'Percentage'
FROM
  (
    SELECT
      EM.event_name AS 'EventName',
      COUNT(*) AS 'NumberOfEvents'
    FROM SSISDB.catalog.event_messages AS EM
    WHERE EM.event_name IS NOT NULL
    GROUP BY EM.event_name
  ) EventCounts
ORDER BY EventCounts.EventName ASC

 
This shows you the types of events stored in your SSISDB, the number of events of each type, and their percentage of the whole dataset. This can help you determine the logging level you need in your specific case.

 
But here’s the catch…
Performance logging doesn’t actually make your packages run faster… Koen sent me this blog post by Matt Masson (Blog | @mattmasson). In his blog post, he explains which events are logged at each specific level. And this is what he says about the Performance level:

The Performance log level should be used when you are doing benchmarking and performance tuning for your packages. While it actually logs less messages to the [catalog].[operation_messages] view than Basic, it captures a lot more events internally to analyze the performance of the data flow components. As a result, there is a bit more overhead during execution – packages run with Basic will actually run a little faster than Performance (in this case Performance means “give me all of the performance details”, not “run my packages as fast as you can”).

 
Conclusion
Even though SSIS is easy to use, there are some pros and cons. The longer I work with SSIS (and that’s not on a daily basis), the more of these pitfalls I discover. And even though they can be fixed pretty fast, it normally takes time to figure out the problem, because I’m not that familiar with the inner workings of SSIS. And even when you think you’ve found a solution for your problem, SSIS just works slightly differently than expected. I’m just glad I have friends like Koen to help me out when needed. Thanks again Koen!

SSISDB: Why bigger isn’t always better

Two weeks ago I encountered some strange issues with the SSISDB. Packages and processes started failing, and all we could find was this error message:

[Screenshot: the error message in the “All Executions” report]
The image above is an extract of the standard SSMS “All Executions” report. You can find this by right-clicking your SSISDB, clicking Reports, then Standard Reports, and choosing the “All Executions” report.

The packages kept failing, but we couldn’t find a cause. Deploying new packages wasn’t possible either. There were no events logged in the report, in the Windows event viewer, or in the SQL Server error log. So there was no starting point for investigating this issue further.

 
Finding the cause
Checking the size of the SSISDB revealed a possible cause of the issue: it was 72GB! This seemed odd, because we don’t store a lot of packages in the SSISDB. I knew SSISDB contained some logging information, but I didn’t know how much. Checking the “Disk usage per table” report (another standard report in SSMS), the problem became a bit more obvious:

[Screenshot: the “Disk usage per table” report showing the largest SSISDB tables]

So basically there were over 67 million records in the database! This shouldn’t have happened, because the log retention should be set to 14 days. But we quickly found the problem:

[Screenshot: the SSISDB catalog properties showing a retention period of 365 days]

The retention was set to 365 days. This could only mean the SSISDB was re-created, and this setting wasn’t changed after the deployment. This resulted in a lot of extra logging data in the database. Because of the number of records, the standard “SSISDB Maintenance job” ran for 40+ minutes, instead of the maximum of 8 minutes it normally needs to clean up logging. And just setting the retention to 14 days and running the maintenance job would result in an endlessly running job. So how can you prevent that?
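By the way, you don’t need the GUI to check these settings; you can query the catalog properties directly. A quick sketch:

SELECT property_name, property_value
FROM SSISDB.catalog.catalog_properties
WHERE property_name IN (N'RETENTION_WINDOW', N'OPERATION_CLEANUP_ENABLED')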

 
Grabbing the bigger hammer
In most cases, grabbing a bigger hammer to solve the problem isn’t your best option, but in this case I didn’t see another way. After some searching, I found this blog post about truncating the log tables in SSISDB. Below is a copy of the script. The credits for this script go to Ibrahim Naji (Blog | @thinknook).


CREATE PROCEDURE [internal].[cleanup_server_retention_window_truncateall]
AS  
     
SET NOCOUNT ON
     
DECLARE @enable_clean_operation bit
DECLARE @retention_window_length INT
     
DECLARE @caller_name nvarchar(256)
DECLARE @caller_sid  varbinary(85)
DECLARE @operation_id BIGINT
     
EXECUTE AS CALLER
    SET @caller_name =  SUSER_NAME()
    SET @caller_sid =   SUSER_SID()
REVERT
          
     
BEGIN TRY
    SELECT @enable_clean_operation = CONVERT(bit, property_value) 
        FROM [catalog].[catalog_properties]
        WHERE property_name = 'OPERATION_CLEANUP_ENABLED'
         
    IF @enable_clean_operation = 1
    BEGIN
        SELECT @retention_window_length = CONVERT(INT,property_value)  
            FROM [catalog].[catalog_properties]
            WHERE property_name = 'RETENTION_WINDOW'
                 
        IF @retention_window_length <= 0 
        BEGIN
            RAISERROR(27163    ,16,1,'RETENTION_WINDOW')
        END
             
        INSERT INTO [internal].[operations] (
            [operation_type],  
            [created_time], 
            [object_type],
            [object_id],
            [object_name],
            [STATUS], 
            [start_time],
            [caller_sid], 
            [caller_name]
            )
        VALUES (
            2,
            SYSDATETIMEOFFSET(),
            NULL,                     
            NULL,                     
            NULL,                     
            1,      
            SYSDATETIMEOFFSET(),
            @caller_sid,            
            @caller_name            
            ) 
        SET @operation_id = SCOPE_IDENTITY() 


        -- Remove all [internal].[executions] dependencies
        TRUNCATE TABLE [internal].[executable_statistics]
        TRUNCATE TABLE [internal].[execution_component_phases]
        TRUNCATE TABLE [internal].[execution_data_statistics]
        TRUNCATE TABLE [internal].[execution_data_taps]
        TRUNCATE TABLE [internal].[execution_parameter_values]
        TRUNCATE TABLE [internal].[execution_property_override_values]


        -- Remove all [internal].[event_message_context] dependencies
        TRUNCATE TABLE [internal].[event_message_context]

        -- Remove all non-dependent tables
        TRUNCATE TABLE [internal].[operation_os_sys_info]
        TRUNCATE TABLE [internal].[operation_permissions]
        TRUNCATE TABLE [internal].[validations]
        TRUNCATE TABLE [internal].[extended_operation_info]

        -- Deal with [internal].[event_messages] and [internal].[operation_messages]
        ALTER TABLE [internal].[event_message_context] DROP CONSTRAINT [FK_EventMessageContext_EventMessageId_EventMessages]
         
        TRUNCATE TABLE internal.event_messages
         
        ALTER TABLE [internal].[event_message_context]  WITH CHECK ADD  CONSTRAINT [FK_EventMessageContext_EventMessageId_EventMessages] FOREIGN KEY([event_message_id])
        REFERENCES [internal].[event_messages] ([event_message_id])
        ON DELETE CASCADE

        ALTER TABLE [internal].[event_messages] DROP CONSTRAINT [FK_EventMessages_OperationMessageId_OperationMessage]
         
        TRUNCATE TABLE [internal].[operation_messages]

        ALTER TABLE [internal].[event_messages]  WITH CHECK ADD  CONSTRAINT [FK_EventMessages_OperationMessageId_OperationMessage] FOREIGN KEY([event_message_id])
        REFERENCES [internal].[operation_messages] ([operation_message_id])
        ON DELETE CASCADE

        -- Deal with [internal].[executions]

        ALTER TABLE [internal].[executable_statistics] DROP CONSTRAINT [FK_ExecutableStatistics_ExecutionId_Executions]
        ALTER TABLE [internal].[execution_component_phases] DROP CONSTRAINT [FK_ExecCompPhases_ExecutionId_Executions]
        ALTER TABLE [internal].[execution_data_statistics] DROP CONSTRAINT [FK_ExecDataStat_ExecutionId_Executions]
        ALTER TABLE [internal].[execution_data_taps] DROP CONSTRAINT [FK_ExecDataTaps_ExecutionId_Executions]
        ALTER TABLE [internal].[execution_parameter_values] DROP CONSTRAINT [FK_ExecutionParameterValue_ExecutionId_Executions]
        ALTER TABLE [internal].[execution_property_override_values] DROP CONSTRAINT [FK_ExecutionPropertyOverrideValue_ExecutionId_Executions]

        TRUNCATE TABLE [internal].[executions]

        ALTER TABLE [internal].[execution_property_override_values]  WITH CHECK ADD  CONSTRAINT [FK_ExecutionPropertyOverrideValue_ExecutionId_Executions] FOREIGN KEY([execution_id])
        REFERENCES [internal].[executions] ([execution_id])
        ON DELETE CASCADE

        ALTER TABLE [internal].[execution_parameter_values]  WITH CHECK ADD  CONSTRAINT [FK_ExecutionParameterValue_ExecutionId_Executions] FOREIGN KEY([execution_id])
        REFERENCES [internal].[executions] ([execution_id])
        ON DELETE CASCADE

        ALTER TABLE [internal].[execution_data_taps]  WITH CHECK ADD  CONSTRAINT [FK_ExecDataTaps_ExecutionId_Executions] FOREIGN KEY([execution_id])
        REFERENCES [internal].[executions] ([execution_id])
        ON DELETE CASCADE

        ALTER TABLE [internal].[execution_data_statistics]  WITH CHECK ADD  CONSTRAINT [FK_ExecDataStat_ExecutionId_Executions] FOREIGN KEY([execution_id])
        REFERENCES [internal].[executions] ([execution_id])
        ON DELETE CASCADE
         
        ALTER TABLE [internal].[execution_component_phases]  WITH CHECK ADD  CONSTRAINT [FK_ExecCompPhases_ExecutionId_Executions] FOREIGN KEY([execution_id])
        REFERENCES [internal].[executions] ([execution_id])
        ON DELETE CASCADE
         
        ALTER TABLE [internal].[executable_statistics]  WITH CHECK ADD  CONSTRAINT [FK_ExecutableStatistics_ExecutionId_Executions] FOREIGN KEY([execution_id])
        REFERENCES [internal].[executions] ([execution_id])
        ON DELETE CASCADE
         

        -- Deal with [internal].[operations]
        DECLARE @deleted_ops TABLE(operation_id BIGINT, operation_type SMALLINT)

        DELETE --TOP (@delete_batch_size)
        FROM [internal].[operations] 
        OUTPUT DELETED.operation_id, DELETED.operation_type INTO @deleted_ops
        WHERE operation_id != @operation_id

             
             
        DECLARE @execution_id BIGINT
        DECLARE @sqlString              nvarchar(1024)
        DECLARE @key_name               [internal].[adt_name]
        DECLARE @certificate_name       [internal].[adt_name]
             
             
        DECLARE execution_cursor CURSOR LOCAL FOR 
            SELECT operation_id FROM @deleted_ops 
            WHERE operation_type = 200
             
        OPEN execution_cursor
        FETCH NEXT FROM execution_cursor INTO @execution_id
             
        WHILE @@FETCH_STATUS = 0
        BEGIN
            SET @key_name = 'MS_Enckey_Exec_'+CONVERT(VARCHAR,@execution_id)
            SET @certificate_name = 'MS_Cert_Exec_'+CONVERT(VARCHAR,@execution_id)
            SET @sqlString = 'IF EXISTS (SELECT name FROM sys.symmetric_keys WHERE name = ''' + @key_name +''') '
                +'DROP SYMMETRIC KEY '+ @key_name
                EXECUTE sp_executesql @sqlString
            SET @sqlString = 'IF EXISTS (select name from sys.certificates WHERE name = ''' + @certificate_name +''') '
                +'DROP CERTIFICATE '+ @certificate_name
                EXECUTE sp_executesql @sqlString
            FETCH NEXT FROM execution_cursor INTO @execution_id
        END
        CLOSE execution_cursor
        DEALLOCATE execution_cursor

        END
END TRY
BEGIN CATCH
         
         
    IF (CURSOR_STATUS('local', 'execution_cursor') = 1 
        OR CURSOR_STATUS('local', 'execution_cursor') = 0)
    BEGIN
        CLOSE execution_cursor
        DEALLOCATE execution_cursor            
    END
         
    UPDATE [internal].[operations]
        SET [STATUS] = 4,
        [end_time] = SYSDATETIMEOFFSET()
        WHERE [operation_id] = @operation_id;       
    THROW
END CATCH
     
RETURN 0

 
After running this on the test environment, I found it worked as expected. I then ran the script on production (where the problem was found in the first place), and when it finished I changed the retention from 365 to 14 days:
[Screenshot: the catalog properties with the retention period changed to 14 days]

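The retention change itself can also be scripted instead of done through the catalog properties dialog; a minimal sketch:

EXEC SSISDB.catalog.configure_catalog
  @property_name = N'RETENTION_WINDOW',
  @property_value = 14
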
After doing that, I ran the “SSISDB Maintenance job” (which completed without issues), and an SSIS package as a test (which also ran fine). And looking at the record counts, the cleanup clearly worked:
[Screenshot: the record counts after the cleanup]

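If you want to verify the effect on your own system, a simple row count on the biggest logging tables (the same tables the script truncates) tells you enough; a sketch:

SELECT COUNT(*) AS [EventMessages] FROM SSISDB.internal.event_messages
SELECT COUNT(*) AS [OperationMessages] FROM SSISDB.internal.operation_messages
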
I’m not saying you should run this on your system; I’m just saying it works fine on my system, and in my specific case. If you find any code on the internet, test it on a non-critical system first, before you run it in production!

 
Conclusion
Having error logging is a good thing. It can help you pinpoint problems, and you can see if and how processes grow over time. But too much SSIS logging causes performance issues, prevents you from deploying new packages, and can even cause unexpected termination of SSIS packages. One way to prevent that is by changing the history retention of your SSISDB.

SQLUGBE Session – Resources

Last week I presented a session to the SQL Server User Group Belgium. The session about CDC was my first ever user group presentation, and it was exciting and scary at the same time. But I was glad there were some familiar faces in the audience. Not only was Pieter Vanhove (Blog | @Pieter_Vanhove) attending, but also Koen Verbeeck (Blog | @ko_ver) and Johan Bijnens (@alzdba). This made it a bit easier to talk to the group of attendees (about 25), because I knew I could use their help if things didn’t go as planned. The session went okay, but I felt a bit too nervous, talked too fast, and didn’t really watch the time. All rookie mistakes, of course. There was a 2.5-hour timeslot reserved, and in the end I talked about CDC for 2 hours and 15 minutes! Surprisingly, it didn’t feel like that at all. Sorry to all the attendees for that!

A few attendees came by after the session to give me their opinion. That first feedback was better than I could have imagined. The rest of the feedback will be sent to me later; hopefully it will be as good as the first reactions. This experience convinced me to look for the next challenge. A new event, a different audience, and new topic(s).

For all the attendees (and the ones who just want to check out the slides and demos), I shared the resources:

 
Slide deck (pdf)
Slide deck + demos (zip)

 
Thanks again to all attendees, thank you SQL Server User Group Belgium for organizing the session, and thank you Kohera for sponsoring the session!

 
 
UPDATE:
The evaluation just came in, and it’s better than I could ever have expected!

Question: Did this evening fulfill your technical expectations?
Score: 7.8 / 9

Question: How do you evaluate presentation skills?
Score: 7.56 / 9

Question: How do you evaluate the presenter’s knowledge?
Score: 8.2 / 9

Looking back at 2014, and forward to 2015

The last year was a little bumpy, with a lot of ups and downs. Looking back at 2014, I accomplished a lot of cool things, and already set some things in motion for 2015. So what happened for me in 2014?

 
FORG
2014 was the first year I joined the Friends of Red Gate (FORG) program. I was surprised, and felt honored, when I got the confirmation in February that I was a part of FORG. Red Gate has always been a company I respect a lot (so much that some people asked if I owned Red Gate stock, whenever I mentioned them in a conversation). The fact that they let me into their program was a huge honor for me. It enabled me to contact the product teams directly, which (hopefully) resulted in us (FORG members and Red Gate developers) improving the tooling this year.

For next year, I’m hoping I can continue working with Red Gate. There are a lot of things I would like to do in cooperation with Red Gate, that I didn’t or couldn’t do this year. Hopefully 2015 will be the year that I can do some of the things I have on my to-do list!

 
SQLCoop
In February of 2014, I started SQLCoop with 3 other people:

Julie Koesmarno (Blog | @MsSQLGirl)
Mickey Stuewe (Blog | @SQLMickey)
Chris Yates (Blog | @YatesSQL)

 
We wrote a total of 8 posts this year, on a number of subjects. Hopefully next year there will be more posts on various subjects.

 
Blogging
My blogging in 2014 didn’t go exactly as planned. At the end of last year, I set myself a goal: blog more often, and try to write a blog post once a week. I really tried to keep up that pace, but unfortunately I lost it early in the year. I did manage to publish 30 posts this year (including this one), which is one more than last year. So I did reach my goal (blog more than last year), but not by as much as I would’ve liked.

 
Blog traffic
The blog traffic increased again, in a way I didn’t expect. Between 2012 and 2013, the traffic to this blog quadrupled. That was a stunning success, and I didn’t expect any growth this year. But looking at the statistics right now, I see that visits to my blog more than doubled compared to last year! I was hoping for a small increase, but not this much! Looking at that, I can only hope I helped a lot of people save time with the ramblings I call blog posts!

 
Events
This was the year of the great events. In July, I attended SQLBits in Telford, UK. This was a great event, wonderfully organized, and certainly an event everyone needs to visit at least once! People told me this was the biggest and nicest SQL event in Europe, and I can tell you: they weren’t lying! What an event, what attendees, and what a great job by the organizers!

In October there was SQL Saturday Holland. This year was (again) better than last year, and it was a great event! I also got the chance to volunteer, which is definitely something I’m doing again next year; it was really great to help the organizers out. It was a wonderful day with great sessions, and it was the moment I got my first speaking opportunity. Nothing but good things happened that day.

 
New friends
This year I’ve also met a number of wonderful people from the community. A lot of them I met at SQLBits, but also at SQL Saturday. It was great to meet Matan, Tobiasz, Chris, Julie, Nicky, Pieter, Brent, Kevin, Grant, and many others, or to talk to them again. Meeting all those people really shows how big and diverse the community is, and it reminds me why I like this community so much. I’m confident I will meet a lot of cool people next year, and catch up with people I haven’t seen for too long.

 
Speaking
This year I started to seriously think about speaking at events and user groups. All the good stories I heard from friends in the community made me consider this. In July during SQLBits, I submitted 3 sessions for SQL Saturday Holland, and that was the first step. I wasn’t surprised that I didn’t get picked. There were a lot of good speakers, and I was a newcomer with no experience. But at SQL Saturday Holland, I met Pieter Vanhove (Blog | @Pieter_Vanhove). We talked about all different kinds of subjects (including speaking), and he invited me to do a session at the SQL Server User Group Belgium in January 2015. This means I’m speaking at my first ever User Group session. Hopefully this will be the start of something good, with which I can give back to the community in time.

 
I want to wish you all a very happy, successful and interesting new year! Go and try to make 2015 your year, and take that leap of faith when you have the chance! I know I did! :)

Speaking at SQL Server User Group Belgium

I am extremely proud and thrilled to announce that I will be speaking at the SQL Server User Group Belgium on the 15th of January 2015. The session is called “Speeding up your ETL with CDC”. In this session you’ll get an introduction to CDC, a look at why CDC may be your best option, the pros and cons of CDC, and how easy it can be to implement it in your company.

About 2 months ago, I met Pieter Vanhove (Blog | @Pieter_Vanhove) at SQL Saturday Holland. We started talking about speaking at events, how to get started, and how I would like to try it out someday. Then Pieter asked me if I would be interested in speaking at SQL Server User Group Belgium. I said yes instantly!

I want to thank Pieter and the SQL Server User Group Belgium for this opportunity!

This will be my first user group session, and I’m not sure what to expect. I’m so glad I got so many tips from great friends in the community (thank you all!!!), and these tips will help me a lot. Let’s hope all works out, and that I can make my first event a success!

The event details can be found on the SQLUG.be website, where my session is announced. You can register here.

And thank you Kohera for sponsoring the event!

SQL Sentry Plan Explorer: You can’t live without it

Every data professional out there will run into slow queries, or performance issues they can’t explain, at some point. At that moment, it’s difficult to explain the problem without looking at an execution plan. SQL Server Management Studio (SSMS) has built-in functionality to look at these execution plans, but it isn’t always as useful as we would like it to be. Fortunately, there’s a great free tool that’ll help you with query tuning and pinpointing the issue in badly performing queries.

 
Download
SQL Sentry Plan Explorer is free, and available on the SQL Sentry website. Even though it says it’s a trial version, it won’t expire after a certain period. The only thing that’s “trial” about it is that some functionality is blocked in the free version. But all the good stuff is available for free.

 
Integration in SSMS
When you start the install, it doesn’t ask you to shut down SSMS, but I recommend you do. If you don’t close SSMS, you won’t see the SSMS add-in menu. It will show up after the setup is finished and you start a new instance of SSMS.

 
Creating a query, and opening it in Plan Explorer
As an example, I’ve created a really bad query on the AdventureWorks2012 database:

USE AdventureWorks2012
GO


DECLARE @MinPrice INT = -1;


WITH Shipping AS
(
SELECT
  PV.ProductID AS ProductID,
  UM.Name AS ShippingPer,
  CASE
    WHEN UM.Name = 'Each' THEN PV.StandardPrice
    WHEN UM.Name = 'Dozen' THEN PV.StandardPrice / 12
    ELSE @MinPrice
  END AS ShippingCostPerUnit
FROM Purchasing.ProductVendor AS PV
INNER JOIN Production.UnitMeasure AS UM ON UM.UnitMeasureCode = PV.UnitMeasureCode
)


SELECT
  P.ProductID,
  P.ProductNumber,
  P.Name,
  S.ShippingCostPerUnit,
  Quantity.TotalQuantity,
  P.ListPrice,
  dbo.ufnGetProductListPrice(P.ProductID, GETDATE()) AS XYZ,
  Locations.TotalLocations,
  P.ListPrice + S.ShippingCostPerUnit AS TotalCostProduct,
  Quantity.TotalQuantity * P.ListPrice AS TotalValueStock,
  ((Quantity.TotalQuantity * P.ListPrice) / Locations.TotalLocations) AS AverageValuePerLocation
FROM Production.Product AS P
INNER JOIN Shipping AS S ON S.ProductID = P.ProductID
CROSS APPLY
(
  SELECT SUM(Quantity) AS TotalQuantity
  FROM Production.ProductInventory
  WHERE ProductID = P.ProductID
  GROUP BY ProductID
) AS Quantity
CROSS APPLY
(
  SELECT COUNT(LocationID) AS TotalLocations
  FROM Production.ProductInventory --WITH(INDEX(0))
  WHERE ProductID = P.ProductID
) AS Locations
WHERE P.ListPrice <> 0
ORDER BY P.ProductID, P.ProductNumber, P.Name, TotalLocations ASC

 
If you run this query in SSMS with “Include Actual Execution Plan” enabled (Ctrl + M), the execution plan is shown in a separate result window. In this window, you can right-click and choose “View with SQL Sentry Plan Explorer”:

[Screenshot: the right-click menu with “View with SQL Sentry Plan Explorer”]
If you click this, you’ll open Plan Explorer, and it will show you the execution plan:

[Screenshot: the execution plan opened in Plan Explorer]
So, is that all?
I can almost hear you think: so what’s the difference between Plan Explorer and the default SSMS windows, besides the fancy colors? Just take a look at all the extra options you get with Plan Explorer. For example, how does your join diagram look? Can you pull that from SSMS? No? Well, I can do that with Plan Explorer:

[Screenshot: the join diagram in Plan Explorer]
Your most expensive operation in the query? Yes, you could find that by looking at the percentages shown in your query plan. But can you show me why it’s that expensive? Again, I can do that with Plan Explorer:

[Screenshot: the cost breakdown of the most expensive operations in Plan Explorer]
Can you do your job without it?
If I ask myself this question, I think I can honestly answer it with: yes. Yes, I can do my job without it. But Plan Explorer makes it SO much easier to pinpoint the problem, and to get a quick overview of query performance. Normally I look at the query plan in SSMS first, and then immediately open a Plan Explorer window to take a closer look at the problems.

So if you write queries on a daily basis, and you’re responsible for, or interested in, query performance: download it today, and try it out yourself. I promise you won’t regret it!
If you want to read more about SQL Sentry Plan Explorer, don’t forget to check out these blog posts:

Julie Koesmarno: Analysing Execution Plans With SQL Sentry Plan Explorer
Mickey Stuewe: On sabbatical
Chris Yates: SQL Sentry Plan Explorer – Don’t Leave Home Without It
