T-SQL Tuesday #59 – My Hero

T-SQL Tuesday is a recurring blog party, started by Adam Machanic (Blog | @AdamMachanic). Each month a blog hosts the party, and everyone who wants to can write a post about a specific subject.

This month the subject is “My Hero”. If you want to read the opening post, please click the image below to go to the party-starter: Tracy McKibben (Blog | @RealSQLGuy).



 
Everybody needs a hero. Every kid needs a champion. Someone you look up to. It doesn’t matter if you’re young or old, everybody needs that. If you don’t have a hero to look up to, it’s hard to dream big.

 
The unsung heroes
Not all heroes are out there like we expect them to be. Some heroes like to stay below the radar. But does that make them any less of a hero than one who is really out there? I don’t think so. But defining a hero is much easier when it’s a person that is well known by the community.

Take some of the best-known people out there, for example Thomas LaRock (Blog | @SQLRockstar), Brent Ozar (Blog | @BrentO), or Grant Fritchey (Blog | @GFritchey). Do you define those people as heroes? I know I do. They make an effort for the community, help people wherever they can, and they work to make us all better SQL Server professionals. But they are out there, and are widely acknowledged as heroes by many people. So defining those people as heroes, even though they are, is easy. It’s praising the unsung heroes that is hard.

 

 
The fallen heroes
There’s a well-known saying: “All that glitters is not gold”. And that also goes for the heroes in the community. In the end, they are just people like you and me. They have good and bad character traits like everyone else. But some heroes turn villain in the end. They use their fame to get what they want, or mistreat other people for not being the hero they are. Unfortunately I know a few of those “heroes” as well…

But does mistreating people make that hero a little less of a hero? I think it does. Even heroes can fall off their pedestal. And even though people consider you a hero, you can’t mistreat other people, or tell them they’re less than you. If you’re the hero, it’s your mission to lead other people to victory. Not to put them down, and convince them they can’t do something.

 

 
Become a hero
A hero can come in many shapes and sizes. And you don’t become a hero overnight (even though that would make things easier sometimes). It takes a lot of effort to become a hero, and to be seen as the hero. And you know what? They make it look so easy, and that’s their strength and weakness at the same time.

So when will you become a hero? What are your plans to become a hero? What steps do you need to take now, in a week, in a month, or even in a year? You can’t become a hero, you say? Yes you can! Everybody can become a hero! Just try it, by giving back to the community. Answer questions on forums, write blogs, start presenting, anything is possible!

Are you the next hero Gotham needs…?

Don’t make a hassle…

Last week, when I posted my previous blog post, I noticed I had published 99 blogs up to that moment. That got me thinking about what to write for blog post #100. After some thought, I decided not to write anything. I didn’t want to make a hassle about it.

 
But when I had dinner this evening with a close friend from the SQL Family (yes, I’m writing this a few minutes before actually posting it), it hit me. I did need to write something. After dinner, we had a wonderful conversation about all kinds of stuff. But it sunk in during the ride home. Sometimes you just need to stand behind your principles, let other people know what you find important, and most of all: let people know you appreciate them. So that’s what I’m trying to do with this blog post.

 
I started blogging in September 2011, just as a brain dump for myself. But after a while, I started to notice people were reading my blogs, and that I could help them with my posts. This led to writing bigger blog posts that were more technical, and not only usable in my specific cases. But blogging was only a gateway to other cool things. Applying for a job, for example. Before I started blogging, that was about introducing yourself, and checking if you could fit the company and the job. But after a year of blogging, I didn’t need to introduce myself anymore. People just knew I blogged, they read my posts, and they started to get to know me because of that.

 
Meeting new friends was also a side effect of the last 2 years. Before that period, I had never heard about SQL Family, the community, and the awesome effort people put into it. But now, new doors are opening, and opportunities seem to present themselves. The people I met, and now call my close friends, supported me when I needed it, helped me where they could, and kicked me in the butt when the situation asked for it.

 
For the upcoming years, I have a lot of plans, new ideas, and big dreams. And with the friends I’ve made, I think that’ll become reality. So thank you all out there: readers, authors, contributors, mentors, role models, heroes, and most of all: thank you my friends!

Using FOR XML in a CROSS APPLY

A few blog posts ago, I showed how easy it is to generate JSON, JavaScript or arrays with the FOR XML clause. But in those examples, the output was transformed to text. Selecting this output is the same as a normal select. But what if you really want to output XML? Does this work the same? Let’s create the same example table again, and rebuild the query to output XML.

 
Creating resources
Just as in my last blog post, let’s use the airline example:

CREATE TABLE #TMP_AirlineTickets
  (ID INT IDENTITY(1,1),
   Airline VARCHAR(10),
   Departure_Airport_Code VARCHAR(5),
   Price DECIMAL(10,4))


INSERT INTO #TMP_AirlineTickets
  (Airline, Departure_Airport_Code, Price)
VALUES
  ('BA', 'RTM', 10.00),
  ('KLM', 'AMS', 125.00),
  ('BA', 'LHR', 15.00),
  ('KLM', 'BCN', 50.00),
  ('KLM', 'BHX', 75.00)

 
When you write a query with a CROSS APPLY on this table, it works like it’s supposed to:

SELECT AT.Airline, CA.Departure_Airport_Code
FROM #TMP_AirlineTickets AT
CROSS APPLY
	(
		SELECT Departure_Airport_Code
		FROM #TMP_AirlineTickets A
		WHERE A.Airline = AT.Airline
	) CA
GROUP BY AT.Airline, CA.Departure_Airport_Code

 
A DISTINCT or a GROUP BY is needed in this query to return only the unique records. As we’ll see later on, DISTINCT doesn’t work for this query, so a GROUP BY is used. This results in a dataset where every airline code is joined to every airport they fly to:
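Based on the sample data inserted above, the result should look something like this (the row order isn’t guaranteed):

Airline   Departure_Airport_Code
BA        LHR
BA        RTM
KLM       AMS
KLM       BCN
KLM       BHX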

 
Generating XML in CROSS APPLY
In the previous post, we got a string as output. But now we want to return XML instead of a normal resultset. So what happens if we put the FOR XML clause in this query? If we do so, it throws an exception:

SELECT AT.Airline, CA.Departure_Airport_Code
FROM #TMP_AirlineTickets AT
CROSS APPLY
	(
		SELECT Departure_Airport_Code
		FROM #TMP_AirlineTickets A
		WHERE A.Airline = AT.Airline
		FOR XML PATH('')
	) CA
GROUP BY AT.Airline, CA.Departure_Airport_Code

 

 
Apparently column 1 of the CROSS APPLY result has no column name. Looking at the result that comes from the CROSS APPLY, the column name is generated automatically:
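If you run the inner FOR XML query on its own, you’ll see that SQL Server generates the column name automatically (normally something like XML_F52E2B61-18A1-11d1-B105-00805F49916B), so there is nothing for the outer query to bind to.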

 
Giving the column an alias in the SELECT doesn’t work. But you can also provide a column list to the CROSS APPLY:

SELECT AT.Airline, CONVERT(XML, CA.Airport_XML) AS Airport_XML
FROM #TMP_AirlineTickets AT
CROSS APPLY
    (
        SELECT Departure_Airport_Code AS DAC
        FROM #TMP_AirlineTickets A
        WHERE A.Airline = AT.Airline
        FOR XML PATH('')
    ) CA (Airport_XML)
GROUP BY AT.Airline, CA.Airport_XML

 
By adding a column list to the CROSS APPLY, we can now use the column name Airport_XML in the outer select. By doing that, we get the airline name, and the departure airport code list:
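With the sample data, the output looks something like this (the order of the airport codes inside the XML depends on the order in which the rows are returned):

Airline   Airport_XML
BA        &lt;DAC&gt;RTM&lt;/DAC&gt;&lt;DAC&gt;LHR&lt;/DAC&gt;
KLM       &lt;DAC&gt;AMS&lt;/DAC&gt;&lt;DAC&gt;BCN&lt;/DAC&gt;&lt;DAC&gt;BHX&lt;/DAC&gt;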

 
Returning XML
So when you want to generate XML in a query, it’s possible to do that with a CROSS APPLY. It’s just as easy as writing a normal query, and it makes you a lot more flexible when you need to output XML from SQL Server. With this, there is no need for CLRs or external applications to generate the XML you need; you can just do it with T-SQL.

Using OUTPUT clause to track changes

Whenever something changes in a database, you would like to know what has changed. This change might break your database or application later, so all schema or server changes are monitored. But how about data changes? Do you monitor those? You don’t have to feel bad if you don’t, because most people don’t track those changes. But you can track these changes in a very easy way, by using the OUTPUT clause in your query. Let’s take a look at how that works.

 
Create resources
As an example, let’s take a look at a bank account. If you have a bank account, there’s some money in it, and the money you spend is subtracted from your account balance. First of all, let’s create two tables: one for the account balance, and one for logging the changes:

CREATE TABLE #Account
  (ID INT IDENTITY(1,1),
   Balance FLOAT)


CREATE TABLE #Logging
  (DEL_ID INT,
   DEL_Balance FLOAT,
   UPD_ID INT,
   UPD_Balance FLOAT,
   Balance_Difference FLOAT)

 
To keep it simple, the account table has just one column besides the ID, and that column contains the current balance of the account. Let’s insert a balance to start with:

INSERT INTO #Account
  (Balance)
VALUES
  (250.25)

 
If we run an update query now, the account balance will change. But do we still know what the original balance was? Not without logging. So let’s see how the output clause can change this:

UPDATE #Account
SET Balance = 125.00
OUTPUT Deleted.ID,
       Deleted.Balance,
       Inserted.ID,
       Inserted.Balance,
       Inserted.Balance - Deleted.Balance
INTO #Logging

 
The account table now only contains the new value. But because the OUTPUT clause logged the change we made, we can check the logging table and find the old value. In the OUTPUT clause you can use the pseudo-tables called Deleted and Inserted. The Deleted table contains the old values (before the change), and the Inserted table contains the new values (after the change). When checking the logging table, you’ll see both the old and the new values:
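A quick SELECT * FROM #Logging should return something like this for the example above:

DEL_ID  DEL_Balance  UPD_ID  UPD_Balance  Balance_Difference
1       250.25       1       125          -125.25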

 
So as you can see, the logging table contains both the old and the new version of the record. And this not only works for update queries, but also for insert, delete and merge statements. This gives you the opportunity to log changes in the queries you think need it. Just think about what you want to log, create a log table, and add an OUTPUT clause to your query. Logging changes isn’t always this simple, but with the OUTPUT clause it can be as easy as that.
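As a minimal sketch of the same idea for a delete: only the Deleted pseudo-table has data in that case, so the UPD_ columns in the logging table simply stay NULL:

DELETE FROM #Account
OUTPUT Deleted.ID,
       Deleted.Balance
INTO #Logging
  (DEL_ID, DEL_Balance)
WHERE ID = 1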

 
If you want to read more about CRUD, don’t forget to check out these blog posts:

- Julie Koesmarno: On sabbatical
- Mickey Stuewe: Going Beyond The INSERT Statement
- Chris Yates: C-R-U-D The Basics

Creating your own spatial data with Windows Phone

In the other parts of the Spatial Data Series, I showed you how you can import, query and use spatial data. But how do you generate your own spatial data?

 
Apps
Over the last few months, I tried to generate my own spatial data in a number of ways. Some attempts were more successful than others. But when I found the right app, it was a great step in the right direction. At this moment, I use the Run The Map app. That app allows me to record routes and, most importantly, to export the routes that were recorded. So how can you do that?

 
Start by running the app (in this case, from the start screen of my Windows Phone):

 
When the app is started, press the big start button to start recording your route:

 
When you start the recording, the start button will change into a stop button. If you’re done recording, just press the stop button.

Now that you’ve recorded your route, you want to export it. To export the recording, open the menu at the bottom of the screen, and press History:

 
In the history screen, press the synchronize button in the bottom menu:

 
This will allow you to export the “XML” file to your OneDrive. You’ll see a login window, and once you’ve entered your information, you’re good to go. The sync might take a while, depending on the size of your recorded route, and the speed of your wireless connection. The files will be placed in a folder called “RunTheMap”, that can be found in the root of your OneDrive.

 
Processing the XML
So now that we have the export file, we need to retrieve the data from that file. If you export the data to OneDrive, you’ll get 3 types of files:

- .DAT file, which you don’t need to use
- .KML file, which you can use to import the data into SQL Server
- .GPX file, which we will use to import the data into SQL Server

 
Let’s assume your .GPX file looks like this:

<?xml version="1.0" encoding="utf-8"?>
<gpx xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" version="1.1">
 <metadata>
  <name>2014-06-04</name>
 </metadata>
 <trk>
  <name>S1</name>
  <desc></desc>
		<trkseg>
			<trkpt lat="51.916667" lon="4.500000">
				<ele>-7</ele>
				<time>2014-09-01T08:00:00Z</time>
				<hdop>0.0</hdop>
				<vdop>0.0</vdop>
				<pdop>0.0</pdop>
			</trkpt>
			<trkpt lat="50.851368" lon="5.690973">
				<ele>-7</ele>
				<time>2014-09-01T16:00:00Z</time>
				<hdop>0.0</hdop>
				<vdop>0.0</vdop>
				<pdop>0.0</pdop>
			</trkpt>
		</trkseg>
 </trk>
</gpx>

 
This sample file basically consists of 2 points: the center of Rotterdam and the center of Maastricht. Both cities are in the Netherlands, and are just used to illustrate the output. The content of the file can be copy-pasted into the script below (just replace the sample string with your own XML content from the .GPX file):

DECLARE @XML TABLE
  (XML_COLUMN XML)


INSERT INTO @XML
SELECT
'<?xml version="1.0" encoding="utf-8"?>
<gpx xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" version="1.1">
 <metadata>
  <name>2014-06-04</name>
 </metadata>
 <trk>
  <name>S1</name>
  <desc></desc>
		<trkseg>
			<trkpt lat="51.916667" lon="4.500000">
				<ele>-7</ele>
				<time>2014-09-01T08:00:00Z</time>
				<hdop>0.0</hdop>
				<vdop>0.0</vdop>
				<pdop>0.0</pdop>
			</trkpt>
			<trkpt lat="50.851368" lon="5.690973">
				<ele>-7</ele>
				<time>2014-09-01T16:00:00Z</time>
				<hdop>0.0</hdop>
				<vdop>0.0</vdop>
				<pdop>0.0</pdop>
			</trkpt>
		</trkseg>
 </trk>
</gpx>'


;WITH X_CTE AS
(
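  -- Shred the GPX document: T1 returns every <trk> node, T2 returns every <trkpt> node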
  SELECT
    T1.Name.query('.') AS Name,
    T2.X_Content.query('.') AS X_Content
  FROM   @XML AS X
  CROSS APPLY XML_Column.nodes('/gpx/trk') AS T1(Name)
  CROSS APPLY XML_Column.nodes('/gpx/trk/trkseg/trkpt') AS T2(X_Content)
),
XML_Data AS
(
SELECT
  Name.value('(/trk/name)[1]', 'VARCHAR(20)') AS NAME,
  X_Content.value('(/trkpt/@lat)[1]', 'VARCHAR(20)') AS LAT,
  X_Content.value('(/trkpt/@lon)[1]', 'VARCHAR(20)') AS LON,
  X_Content.value('(/trkpt/ele)[1]', 'VARCHAR(20)') AS ELE,
  X_Content.value('(/trkpt/time)[1]', 'VARCHAR(20)') AS TIME,
  X_Content.value('(/trkpt/hdop)[1]', 'VARCHAR(20)') AS HDOP,
  X_Content.value('(/trkpt/vdop)[1]', 'VARCHAR(20)') AS VDOP,
  X_Content.value('(/trkpt/pdop)[1]', 'VARCHAR(20)') AS PDOP
FROM X_CTE
)


SELECT
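	-- TIME (for example 2014-09-01T08:00:00Z) is split into separate DATE and TIME columns below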
	NAME,
	LAT,
	LON,
	ELE,
	LEFT(TIME, CHARINDEX('T', TIME, 0) - 1) AS DATE,
	LEFT(RIGHT(TIME, 9), LEN(RIGHT(TIME, 9)) - 1) AS TIME,
	HDOP,
	VDOP,
	PDOP
--INTO SpatialData.dbo.RunTheMapData
FROM XML_Data
WHERE 1 = 1

 
This big query will process the data, and will output a clean representation of the file contents:
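For the sample file above, the output should look something like this:

NAME  LAT        LON       ELE  DATE        TIME      HDOP  VDOP  PDOP
S1    51.916667  4.500000  -7   2014-09-01  08:00:00  0.0   0.0   0.0
S1    50.851368  5.690973  -7   2014-09-01  16:00:00  0.0   0.0   0.0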

 
With this data (for example, if you insert it into a table by uncommenting the INTO line in the example script), you can create a line overlay on a map (a manual about downloading and importing this data can be found here).

 
For clarification, I’ve drawn a circle around the start and end points in Rotterdam and Maastricht. I’ve let SQL Server draw a line between the 2 points, which represents the route that was recorded. But obviously the route you record yourself will be a lot more detailed.

 
Creating a route
To map your full route, you need to combine all the coordinates in the table into one line object. To achieve this, I’ve written a small cursor script (yes, I know, and I’m really sorry! But this was the quickest way at the time) that combines all the coordinates into 1 line. You can download it here. If you don’t want to use that, check out this blog post from Rob Farley (Blog | @rob_farley), who shows you how it can be done in a clean and neat way.
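If you’d rather avoid the cursor, a minimal set-based sketch could look like the one below. It assumes the query results were saved into SpatialData.dbo.RunTheMapData (the commented-out INTO in the script above), and it uses the same FOR XML PATH trick as before to build the Well-Known Text for a LINESTRING:

-- Assumes the results from the script above were stored in SpatialData.dbo.RunTheMapData
DECLARE @WKT VARCHAR(MAX);

-- Build a comma-separated list of "longitude latitude" pairs, ordered by timestamp
SELECT @WKT = STUFF(
  (SELECT ', ' + LON + ' ' + LAT
   FROM SpatialData.dbo.RunTheMapData
   ORDER BY [DATE], [TIME]
   FOR XML PATH('')), 1, 2, '');

-- Convert the WKT into a geography line object (SRID 4326 = WGS 84, the system used by GPS)
DECLARE @Route geography;
SET @Route = geography::STGeomFromText('LINESTRING(' + @WKT + ')', 4326);

SELECT @Route AS Route;

Selecting the geography instance in Management Studio will also show the line in the spatial results tab, so you can quickly check the shape of the route.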

 
The opportunities…
If that’s done, you can use it as an overlay on any map you like. For example, you could map your traveling history with this. Or you could check out which route you drive to work in the morning, or which route you take home in the evening. Parcel companies and truck drivers could track the routes they drive with this. The opportunities are only limited by your own imagination.

SSIS: Zipping files with WinRAR

One of the strengths of SSIS (SQL Server Integration Services) is that it’s really powerful with the components that are available out of the box. But it’s also easy to use the “Execute Process Task” to execute external tools. One of these tools can be WinRAR.

 
Zip file task
By default, SSIS doesn’t have a zip file task. There are some 3rd party components, like the one Joost van Rossum (Blog | @ssisjoost) built. You can find his custom task here. But what if you don’t want to use 3rd party components? You can create your own zip task, by using an “Execute Process Task”.

 
Variables and Parameters
Because I want to keep this as easy as possible, and I might want to reuse this later in another package, I started by creating 2 parameters and 1 local variable:

 
Variable
Function:
To generate a unique zip file name per day, I decided to add the date to the zip file name

Name:
ZipName

Value:
"ZIPPED_" +
(DT_WSTR, 50) YEAR((DT_DBTIMESTAMP) @[System::StartTime]) + "_" +
RIGHT("00" + (DT_WSTR, 50) MONTH((DT_DBTIMESTAMP) @[System::StartTime]), 2) + "_" +
RIGHT("00" + (DT_WSTR, 50) DAY((DT_DBTIMESTAMP) @[System::StartTime]), 2) +
".zip"

 
This expression will evaluate to:
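For a package that started on 14 October 2014, for example, this would evaluate to ZIPPED_2014_10_14.zip; the actual date obviously depends on when the package runs.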

 
Parameter 1
Function:
Point the “Execute Process Task” to the folder of the files that need to be zipped

Name:
FilePath

Value:
C:\Temp

 
Parameter 2
Function:
Tell WinRAR which files to zip in which directory

Name:
FilesToZip

Value:
C:\\Temp\\*.txt

 
When you’ve added these, you should have this variable:

 
And these parameters:

 
Once that is done, you can add an “Execute Process Task” to the Control Flow. When you open the component you just added, go to the executable textbox, and point it to the WinRAR executable on your disk:

 
Now you need to add the arguments that tell WinRAR which files to zip, and where to put the zip file. In the “Execute Process Task”, click on Expressions, and then on the dotted button:

 
In the next window, select “Arguments” under Property, and press the dotted button again:

 
Now paste the string below in this window:

 
"a -df " + @[$Package::FilePath] + @[User::ZipName] + " " + @[$Package::FilesToZip]

 
If you press the evaluate button, you’ll see this result:

 

 
The command line switches we use are:

a: Archive files (or x for Extract files)
-df: Delete files after archiving

 
So once you know how it’s done, this is pretty easy, and it does what it has to: zip your files without any issues. One thing to remember is that SSIS doesn’t surface the errors WinRAR throws. When I first tested this, I didn’t notice that the directory didn’t exist. My SSIS package kept failing, and I didn’t know why. But after trying the evaluated expression from the “Execute Process Task” in a command prompt, I did see the actual error.

So even though it’s easy to use, it’s not always easy to debug when it fails. But as long as you can execute it via command line yourself, you’re just a few steps away from seeing the actual error.
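For example, running something along the lines of "C:\Program Files\WinRAR\WinRAR.exe" a -df C:\Temp\ZIPPED_2014_10_14.zip C:\Temp\*.txt from a command prompt (the WinRAR path, folder and file name here are just illustrative; use the values your expression evaluates to) will show you what WinRAR is actually complaining about.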

 
If you want to read more about SSIS, don’t forget to check out these blog posts:

- Julie Koesmarno: On sabbatical
- Mickey Stuewe: SQL Server data transferred to a sqlite database using ssis
- Chris Yates: ISPACs – Got To Love Em
