T-SQL Tuesday #51 – Place Your Bets

T-SQL Tuesday is a recurring blog party that was started by Adam Machanic (Blog | @AdamMachanic). Each month a blog hosts the party, and everyone who wants to can write a blog post about a specific subject.

This month the subject is “Place Your Bets”. If you want to read the opening post, please click the image below to go to the party-starter: Jason Brimhall (Blog | @sqlrnnr).



 
When I read about this month's T-SQL Tuesday topic, the first thing that came to mind was things that you know will go wrong sooner or later. When you encounter a situation like this, you immediately know it can't last forever. You want to fix it when you see it, but there's no money, or there's no time at that moment. But they promise you that in a few weeks you can take all the time you need. Well, that'll never happen. Until things go wrong, and you can clean up the mess. Sound familiar? Yes, we've all seen this, or will see this sooner or later.

 
With great power comes great responsibility
Just imagine this with me. One of your colleagues asks you to look at a problem he’s having with a script someone in your company wrote. You probably solved it while he was standing right next to you. He watches you solve the problem, and when it’s solved, he walks away with a thousand-yard stare in his eyes. You don’t really think about it when it happens, but it’ll come to you…

A few weeks later, it’s 10 AM and you’re still having your first coffee of the day, the same developer asks you to look at “his script”. Wait, what?! Yes, he watched you work your magic, and that funny language of “Es-Que-El” seemed easy to learn. So he bought himself a “SQL Server for dummies”, learned all he needs to know in only a weekend, and wonders why it took you so long to learn it. From now on, he can write his own scripts, so he doesn’t need you anymore. Except for this last time.

Opening the script scares you: it's a cursor. But in your frustration and amazement you "fix" the broken script by refactoring the select statement in the cursor. Because the cursor only collects data, you add a "TOP 10" clause to the select statement, and run the script as a test. Nice, it finishes in 25 seconds. "It will only consume 500 rows" is the last thing you hear him say. You send the guy off, so you can continue your own work.

Later in the day, it's about 4 PM, you meet the same guy at the coffee machine. He starts a discussion about how he needs a new PC, because the script YOU wrote is slow (see where this is going…?). It's been running for about 4 hours now, while it should only collect about 500 records. I know what you think: that's impossible. You walk with him to his desk, stop the script, and look at his code. That isn't the query you looked at this morning. Asking your colleague about it explains it all: he "slightly refactored" the script, because he didn't need all those weird statements to get his results. Well, after a fiery discussion of a few minutes, you explain to him that he DOES need the "FETCH NEXT" in the query, because without it the cursor kept running the same statement for only the first record of the select statement you declared for your cursor.
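For anyone who hasn't been bitten by this yet, the pattern looks roughly like this (a minimal sketch; the table and column names are made up for illustration):

```sql
-- Hypothetical example; the table and data are invented.
DECLARE @Customers TABLE (Name VARCHAR(50));
INSERT INTO @Customers VALUES ('Apollo'), ('Boris'), ('Carol');

DECLARE @Name VARCHAR(50);

DECLARE CustomerCursor CURSOR FOR
	SELECT TOP 10 Name FROM @Customers;

OPEN CustomerCursor;
FETCH NEXT FROM CustomerCursor INTO @Name;

WHILE @@FETCH_STATUS = 0
BEGIN
	PRINT @Name;

	-- Remove this FETCH NEXT and @@FETCH_STATUS stays 0 forever:
	-- the loop keeps processing the first row, just like the
	-- "slightly refactored" script above.
	FETCH NEXT FROM CustomerCursor INTO @Name;
END

CLOSE CustomerCursor;
DEALLOCATE CustomerCursor;
```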

So this funny "Es-Que-El" language isn't that easy to learn after all. There's a beautiful quote about that, and I'm not sure who said it: "T-SQL is easy to learn, but hard to master". So putting your money on one horse, in this case buying yourself a book, isn't a good idea.

 
Putting your money on one color
Another great example is a company that had a wonderful Business Intelligence environment. They used the whole nine yards: SQL Server, SSIS, SSAS, SSRS, etc. The downside of that, you ask? It was all hosted on one physical machine, on a single SQL Server instance. Oh, and it was running low on disk space, and there was no room in the chassis to put in extra disks. That's right: it was like juggling burning chainsaws with only one hand. Or an interesting challenge, if you will.

Eventually we hosted a few databases on NAS volumes. At that point, I was told the databases we moved were less important. Pro tip: never EVER trust them when they say that!!! They forgot to tell me the biggest of the moved databases wasn't in the backup plan (a 500 GB database takes a long time to back up), and the last backup was made over a year ago. Surprise: one night the network card failed for maybe only a microsecond, and SQL Server thought the LUN was offline or the disk had crashed. So SQL Server said that the database was corrupt, and that the datafiles were unavailable. After a few hours, a reboot of the server fixed it, and SQL Server could see the disk volumes again. So the database was saved after all.

But you see where I’m going with this? You never know when things go wrong, and putting all your money on one color when playing roulette isn’t the best idea. If the hardware of your single server fails, you fail.

 
Next, Next, Finish?
But the biggest example I can give you of a badly placed bet is companies that work with SQL Server but don't hire a DBA. Have you ever worked for a company that works with Oracle? Every single company that works with Oracle has a dedicated Oracle DBA. But have you ever wondered why that isn't the case when a company works with SQL Server?

Thinking about it, I guess this is because a successful SQL Server installation is only a few "Next, Next, Finish" mouse clicks away. So if the installation is that easy, every developer or person with IT experience can probably administer it too. They couldn't be more wrong. You know that, I know that, every SQL Server professional knows that, but try to convince other people of that fact.

So the worst bet you can place, and this is how I write myself back to the subject of this month, is not hiring a professional to manage your data and data stores. You wouldn't let your local baker fix your car because he wrote some books about cars, right? So why do you let a developer with basic knowledge near your SQL Server? Just because real DBAs cost money? Yes, we do cost some serious money. But in the end, at least when you hire a GOOD DBA, they will make you money. You don't think so? What does a DBA cost per hour? And how much money do you lose when your servers are down for just an hour?

Visualizing Spatial Data – Toggling map layers

Last time I showed you how to add a PushPin to your map control. But what if you want to toggle that (or any other) layer in your map control?

The reason I wanted to do this, is that I want to allow the users of my report to toggle different layers in the map control. That way, you can pre-build all the different layers you think your users need into a single report, and just let them toggle the layers themselves. That saves you a lot of time building a report for every single combination of layer-types.

Multi-layer map control
In order to keep it simple, I used the report from my previous post. Specifically, the version with the standard PushPins. And the only layer I want them to be able to toggle, is the PushPin layer on top of the basic map.

Creating a parameter
Because you want to present an easy solution to your users, I created a Boolean parameter that they can use to toggle the PushPin layer. To create this parameter, right-click on Parameters in Report Builder, and choose "Add Parameter":

In the general options window, make sure you fill in a Parameter Name, a question for your users (so they understand what they need to do), and a Data Type (in this case a Boolean, that’s either True or False):

Because you don’t want to keep your users waiting if they open your report, you might want to give the Boolean a default value. You can do this by adding a default value to your Boolean:

By doing this, the Boolean is True by default. So the user doesn’t have to pick a value if he runs the report for the first time. This might come in handy if you need this as an extra feature in your report, and not a feature you need to configure before the report shows your data.

Expression
Now you need to configure your map layer to use the parameter you created to toggle it. Double-click on your map control, and once you see the Map Layers menu on the right, you can do this by right-clicking on “Layer Data”:

Now click on Visibility, and under Layer visibility, click the last option, and press the Fx button:

In the Expression window, fill in the following expression:
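(The original screenshot of the expression is missing here. Assuming the Boolean parameter was named ShowPushPins — yours may differ — and keeping in mind that the "Show or hide based on an expression" option drives the layer's Hidden property, the expression needs to negate the parameter value:)

```
=Not Parameters!ShowPushPins.Value
```

So when the user picks True, Hidden evaluates to False and the PushPin layer is shown.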

Now if you run the report, you’ll see the Boolean appear in the top bar of your report. By default, it’ll show the PushPins:

And if you change the Boolean to False and press "View Report", you'll see the map without PushPins:

With a little help from my friends…
This post isn't just written by me, but also with the help of Valentino Vranken (Blog | @ValentinoV42). He helped me out on Twitter, because my expression didn't work as expected. So Valentino: I really appreciated your help! Thank you! ;)

Reporting Services – Query database

After reading a tweet by Ray Martinez, I decided to share the scripts for Reporting Services that I often use.

One of the scripts I use a lot shows the Reports that were successfully executed in 2012. For these Reports you will see the last execution time, the rows and bytes returned, and how long it took to gather and show the results:

USE ReportServer

SELECT
	CL.Name						AS ReportName,
	CL.Description				AS ReportDescription,
	CL.Path						AS ReportPath,
	CL.CreationDate				AS ReportCreationDate,
	COUNT(*)					AS TotalNumberOfTimesExecuted,
	MAX(EL.TimeStart)			AS LastTimeExecuted,
	AVG(EL.[RowCount])			AS AVG_NumberOfRows,
	AVG(EL.TimeDataRetrieval)	AS AVG_DataRetrievalTime,
	AVG(EL.TimeProcessing)		AS AVG_TimeProcessing,
	AVG(EL.TimeRendering)		AS AVG_TimeRendering
FROM ExecutionLog EL
JOIN Catalog CL
	ON CL.ItemID = EL.ReportID
WHERE 1 = 1
AND CL.Name IS NOT NULL
AND EL.Status ='rsSuccess'
GROUP BY
	CL.Name,
	CL.Path,
	CL.CreationDate,
	CL.Description
HAVING YEAR(MAX(EL.TimeStart)) = 2012
ORDER BY COUNT(EL.ReportID) DESC

The second script I want to share with you shows the first 1000 successfully executed Reports. The returned data includes the Report format, the parameters used to query the data, information about the returned resultset, the time needed to return and render the data and Report, etc. I used a TOP 1000 because our Report server returned a lot of rows, and I only needed a few for my analysis.

USE ReportServer

SELECT TOP 1000
	EL.InstanceName				AS SQLInstanceName,
	EL.UserName					AS ExecuterUserName,
	EL.Format					AS ReportFormat,
	EL.Parameters				AS ReportParameters,
	EL.TimeStart				AS TimeStarted,
	EL.TimeEnd					AS TimeEnded,
	EL.TimeDataRetrieval		AS TimeDataRetrieval,
	EL.TimeProcessing			AS TimeProcessing,
	EL.TimeRendering			AS TimeRendering,
	EL2.Source					AS Source,
	EL.ByteCount				AS ReportInBytes,
	EL.[RowCount]				AS ReportRows,
	CL.Name						AS ReportName,
	CL.Path						AS ReportPath,
	CL.Hidden					AS ReportHidden,
	CL.CreationDate				AS CreationDate,
	CL.ModifiedDate				AS ModifiedDate,
	EL2.Format					AS RenderingFormat,
	EL2.ReportAction			AS ReportAction,
	EL2.Status					AS ExecutionResult,
	DS.Name						AS DataSourceName,
	DS.Extension				AS DataSourceExtension
FROM ExecutionLog EL
JOIN Catalog CL
	ON CL.ItemID = EL.ReportID
LEFT JOIN ExecutionLog2 EL2
	ON EL2.ReportPath = CL.Path
JOIN DataSource DS
	ON DS.ItemID = CL.ItemID
WHERE 1 = 1
AND EL.Status = 'rsSuccess'
ORDER BY EL.TimeStart DESC

The next script returns an overview of the folders and Reports on your Report server. For every object on your server, you can see the creation and modification date. The joined ExecutionLog table is used to get the number of times the Report was executed, by which user, how much data was returned, and how long that took:

USE ReportServer

SELECT
	CASE CL.Type
		WHEN 1 THEN 'Folder'
		WHEN 2 THEN 'Report'
		WHEN 3 THEN 'Resource'
		WHEN 4 THEN 'Linked Report'
		WHEN 5 THEN 'Data Source'
	END									AS ObjectType,
	CP.Name								AS ParentName,
	CL.Name								AS Name,
	CL.Path								AS Path,
	CU.UserName							AS CreatedBy,
	CL.CreationDate						AS CreationDate,
	UM.UserName							AS ModifiedBy,
	CL.ModifiedDate						AS ModifiedDate,
	CE.CountStart						AS TotalExecutions,
	EL.InstanceName						AS LastExecutedInstanceName,
	EL.UserName							AS LastExecuter,
	EL.Format							AS LastFormat,
	EL.TimeStart						AS LastTimeStarted,
	EL.TimeEnd							AS LastTimeEnded,
	EL.TimeDataRetrieval				AS LastTimeDataRetrieval,
	EL.TimeProcessing					AS LastTimeProcessing,
	EL.TimeRendering					AS LastTimeRendering,
	EL.Status							AS LastResult,
	EL.ByteCount						AS LastByteCount,
	EL.[RowCount]						AS LastRowCount,
	SO.UserName							AS SubscriptionOwner,
	SU.UserName							AS SubscriptionModifiedBy,
	SS.ModifiedDate						AS SubscriptionModifiedDate,
	SS.Description						AS SubscriptionDescription,
	SS.LastStatus						AS SubscriptionLastResult,
	SS.LastRunTime						AS SubscriptionLastRunTime
FROM Catalog CL
JOIN Catalog CP
	ON CP.ItemID = CL.ParentID
JOIN Users CU
	ON CU.UserID = CL.CreatedByID
JOIN Users UM
	ON UM.UserID = CL.ModifiedByID
LEFT JOIN ( SELECT
				ReportID,
				MAX(TimeStart) LastTimeStart
			FROM ExecutionLog
			GROUP BY ReportID) LE
	ON LE.ReportID = CL.ItemID
LEFT JOIN ( SELECT
				ReportID,
				COUNT(TimeStart) CountStart
			FROM ExecutionLog
			GROUP BY ReportID) CE
	ON CE.ReportID = CL.ItemID
LEFT JOIN ExecutionLog EL
	ON EL.ReportID = LE.ReportID
	AND EL.TimeStart = LE.LastTimeStart
LEFT JOIN Subscriptions SS
	ON SS.Report_OID = CL.ItemID
LEFT JOIN Users SO
	ON SO.UserID = SS.OwnerID
LEFT JOIN Users SU
	ON SU.UserID = SS.ModifiedByID
WHERE 1 = 1
ORDER BY CP.Name, CL.Name ASC

The last query is a short one. It returns the Reports that don't inherit the permissions that are set in the site settings menu.

USE ReportServer


SELECT
	Path,
	Name
FROM Catalog
WHERE PolicyRoot = 1 

Hopefully there will be more where this came from. So follow my blog, or contact me by email or Twitter!

Deploying Report Builder 3.0 reports on 2005 Report Server

How fun is it to use new software? As a developer you always want to use the latest software, and in many cases it’s backwards compatible. This (unfortunately) isn’t the case with Report Builder 3.0 and SQL Server 2005 Reporting Services. This is a “by design” issue, but it’s such an annoying “feature”. If you try to deploy a Report built in RB 3.0, you will get the error:
 

“The report definition is not valid. Details: The report definition has an invalid target namespace ‘http://schemas.microsoft.com/sqlserver/reporting/2010/01/reportdefinition’ which cannot be upgraded. (rsInvalidReportDefinition)”

In most cases you will copy-paste the tables or objects from RB 3.0 to the Business Intelligence Development Studio (BIDS), and save it again. But sometimes you just want to take the shortcut, and that is possible as well. As always, I have to mention that I don’t guarantee anything, but it worked for me several times.
 
 
The first step is to open the .RDL file with a text editor, and replace the following line of XML:

 
<Report xmlns:rd="http://schemas.microsoft.com/SQLServer/reporting/reportdesigner" 
xmlns="http://schemas.microsoft.com/sqlserver/reporting/2010/01/reportdefinition">

with:

 
<Report xmlns:rd="http://schemas.microsoft.com/SQLServer/reporting/reportdesigner"
xmlns="http://schemas.microsoft.com/sqlserver/reporting/2008/01/reportdefinition">

 
 
Step two is removing a set of nodes in the RDL that is not supported by versions older than Report Server 2008:

<ReportSections>
    <ReportSection>

And its closing tags:

    </ReportSection>
</ReportSections>

Don’t remove the text inside these tags, only remove the tags themselves!!!
 
 
If you follow these steps, you will be able to deploy a “new” Report to an “old” Report Server without weird copy-paste actions.

Parameters in Report URL

A colleague asked me if it’s possible to set the parameters of a Report in the URL, so that he could send it to an end-user. I didn’t know the answer, and after some research I came to the conclusion that it’s possible.

Just like you're used to, you pass parameters to the Report by modifying the URL. An example of a normal Report URL looks like this:

http://<ServerName>/Reports

 
In order to use parameters in the URL, you need to use the Service URL. The Service URL allows you to add specific options to the URL, so that SSRS knows what to execute. The changed URL looks like this:

See Reports directory:

http://<ServerName>/ReportServer

List Reports in SSRS directory:

http://<ServerName>/ReportServer?%2fArea+51&rs:Command=ListChildren

 
As you can see, SSRS contains a folder called "Area 51" with a subfolder "Finance". This folder contains the Report "CustomerReport", to which we want to pass parameters.

Parameters in a URL are added as follows:

Just a CustomerID:

http://<ServerName>/ReportServer/Pages/ReportViewer.aspx?%2fArea+51%2fFinance%2fCustomerReport&CustomerID=1&rs:Command=Render

And CustomerID and PaymentDate:

http://<ServerName>/ReportServer/Pages/ReportViewer.aspx?%2fArea+51%2fFinance%2fCustomerReport&CustomerID=1&PaymentDate=2011-09-23&rs:Command=Render

 
The "rs:" parameter tells SSRS what to do with the URL. "ListChildren" will create an open-dir look and feel, and "Command=Render" will actually execute and render the Report.

There are a lot more properties that you can use to modify the URL, and give the end-user a Report with specific data, without storing default values in the Report itself.
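For example, the "rs:Format" property renders the Report straight to a specific export format instead of the HTML viewer (the folder, Report name, and parameter here are the same hypothetical ones as above):

```
http://<ServerName>/ReportServer/Pages/ReportViewer.aspx?%2fArea+51%2fFinance%2fCustomerReport&CustomerID=1&rs:Command=Render&rs:Format=PDF
```

This way the end-user gets a PDF download directly, without ever seeing the Report viewer.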

Sources: MSDN
