Creating your own spatial data with Windows Phone

In the other parts of the Spatial Data Series, I showed you how you can import, query and use spatial data. But how do you generate your own spatial data?

 
Apps
The last few months I've tried to generate my own spatial data in a number of ways. Some attempts were more successful than others, but finding the right app was a great step in the right direction. At this moment I use the Run The Map app. That app allows me to record routes and, most importantly, export the recorded routes. So how can you do that?

 
Start by running the app (in this case, from the start screen of my Windows Phone):

 
When the app is started, press the big start button to start recording your route:

 
When you start the recording, the start button will change into a stop button. If you’re done recording, just press the stop button.

Now that you’ve recorded your route, you want to export it. To export the recording, open the menu at the bottom of the screen, and press History:

 
In the history screen, press the synchronize button in the bottom menu:

 
This will allow you to export the recorded file to your OneDrive. You’ll see a login window, and once you’ve entered your information, you’re good to go. The sync might take a while, depending on the size of your recorded route and the speed of your wireless connection. The files will be placed in a folder called “RunTheMap”, which can be found in the root of your OneDrive.

 
Processing the XML
So now that we have the export file, we need to retrieve the data from that file. If you export the data to OneDrive, you’ll get 3 types of files:

- .DAT file, which you don’t need
- .KML file, which you can use to import the data into SQL Server
- .GPX file, which we will use to import the data into SQL Server

 
Let’s assume your .GPX file looks like this:

<?xml version="1.0" encoding="utf-8"?>
<gpx xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" version="1.1">
 <metadata>
  <name>2014-06-04</name>
 </metadata>
 <trk>
  <name>S1</name>
  <desc></desc>
		<trkseg>
			<trkpt lat="51.916667" lon="4.500000">
				<ele>-7</ele>
				<time>2014-09-01T08:00:00Z</time>
				<hdop>0.0</hdop>
				<vdop>0.0</vdop>
				<pdop>0.0</pdop>
			</trkpt>
			<trkpt lat="50.851368" lon="5.690973">
				<ele>-7</ele>
				<time>2014-09-01T16:00:00Z</time>
				<hdop>0.0</hdop>
				<vdop>0.0</vdop>
				<pdop>0.0</pdop>
			</trkpt>
		</trkseg>
 </trk>
</gpx>

 
This sample file basically consists of 2 points: the center of Rotterdam and the center of Maastricht. Both cities are in the Netherlands, and are just used to illustrate the output. The content of the file can be copy-pasted into the script below (just replace the sample string with the XML content of your own .GPX file):

DECLARE @XML TABLE
  (XML_COLUMN XML)


INSERT INTO @XML
SELECT
'<?xml version="1.0" encoding="utf-8"?>
<gpx xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" version="1.1">
 <metadata>
  <name>2014-06-04</name>
 </metadata>
 <trk>
  <name>S1</name>
  <desc></desc>
		<trkseg>
			<trkpt lat="51.916667" lon="4.500000">
				<ele>-7</ele>
				<time>2014-09-01T08:00:00Z</time>
				<hdop>0.0</hdop>
				<vdop>0.0</vdop>
				<pdop>0.0</pdop>
			</trkpt>
			<trkpt lat="50.851368" lon="5.690973">
				<ele>-7</ele>
				<time>2014-09-01T16:00:00Z</time>
				<hdop>0.0</hdop>
				<vdop>0.0</vdop>
				<pdop>0.0</pdop>
			</trkpt>
		</trkseg>
 </trk>
</gpx>'


;WITH X_CTE AS
(
  SELECT
    T1.Name.query('.') AS Name,
    T2.X_Content.query('.') AS X_Content
  FROM   @XML AS X
  CROSS APPLY XML_Column.nodes('/gpx/trk') AS T1(Name)
  CROSS APPLY XML_Column.nodes('/gpx/trk/trkseg/trkpt') AS T2(X_Content)
),
XML_Data AS
(
SELECT
  Name.value('(/trk/name)[1]', 'VARCHAR(20)') AS NAME,
  X_Content.value('(/trkpt/@lat)[1]', 'VARCHAR(20)') AS LAT,
  X_Content.value('(/trkpt/@lon)[1]', 'VARCHAR(20)') AS LON,
  X_Content.value('(/trkpt/ele)[1]', 'VARCHAR(20)') AS ELE,
  X_Content.value('(/trkpt/time)[1]', 'VARCHAR(20)') AS TIME,
  X_Content.value('(/trkpt/hdop)[1]', 'VARCHAR(20)') AS HDOP,
  X_Content.value('(/trkpt/vdop)[1]', 'VARCHAR(20)') AS VDOP,
  X_Content.value('(/trkpt/pdop)[1]', 'VARCHAR(20)') AS PDOP
FROM X_CTE
)


SELECT
	NAME,
	LAT,
	LON,
	ELE,
	LEFT(TIME, CHARINDEX('T', TIME, 0) - 1) AS DATE,
	LEFT(RIGHT(TIME, 9), LEN(RIGHT(TIME, 9)) - 1) AS TIME,
	HDOP,
	VDOP,
	PDOP
--INTO SpatialData.dbo.RunTheMapData
FROM XML_Data
WHERE 1 = 1

 
This big query will process the data, and will output a clean representation of the file contents:
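For the sample file above, the query returns 2 rows, one per track point:

```
NAME  LAT        LON       ELE  DATE        TIME      HDOP  VDOP  PDOP
----  ---------  --------  ---  ----------  --------  ----  ----  ----
S1    51.916667  4.500000  -7   2014-09-01  08:00:00  0.0   0.0   0.0
S1    50.851368  5.690973  -7   2014-09-01  16:00:00  0.0   0.0   0.0
```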

 
With this data (for example, if you insert it into a table by uncommenting the INTO line in the example script), you can create a line overlay on a map (a manual about downloading and importing this data can be found here).

 
For clarification, I’ve drawn a circle around the start and end points: Rotterdam and Maastricht. I’ve let SQL Server draw a line between the 2 points, which represents the recorded route. But obviously the route you record yourself will be a lot more detailed.

 
Creating a route
To map your full route on a map, you need to combine all the coordinates in the table into one line object. To achieve this, I’ve written a small cursor script (yes, I know, and I’m really sorry! But this was the quickest way at the time to do this…), that combines all the coordinates into 1 line. You can download it here. If you don’t want to use that, check out this blog post from Rob Farley (Blog | @rob_farley), who shows you how it can be done in a clean and neat way.
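If you’re on a newer version of SQL Server, the cursor can be avoided entirely. The script below is just a sketch: it assumes SQL Server 2017 or later (for STRING_AGG), and the SpatialData.dbo.RunTheMapData table created by the example script above:

```sql
-- Combine all recorded coordinates into one LINESTRING, without a cursor.
-- WKT uses longitude-first coordinate pairs, and 4326 is the WGS84 SRID
-- used by GPS coordinates.
DECLARE @WKT VARCHAR(MAX);

SELECT @WKT = 'LINESTRING('
    + STRING_AGG(CONVERT(VARCHAR(MAX), LON + ' ' + LAT), ', ')
        WITHIN GROUP (ORDER BY [DATE], [TIME])
    + ')'
FROM SpatialData.dbo.RunTheMapData;

-- Draw the route as a single geography line object
SELECT geography::STLineFromText(@WKT, 4326) AS RecordedRoute;
```

The WITHIN GROUP clause makes sure the points are stitched together in the order they were recorded, which a plain aggregate wouldn’t guarantee.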

 
The opportunities…
Once that’s done, you can use it as an overlay on any map you like. For example, you could map your travel history with this. Or you could check which route you take to work in the morning, or home in the evening. Parcel companies could use this to track the routes their drivers take. The opportunities are only limited by your imagination.

SSIS: Zipping files with WinRAR

One of the strengths of SSIS (SQL Server Integration Services) is that it’s really powerful with the components available out of the box. But it’s also easy to use the “Execute Process Task” to execute external tools. One of these tools can be WinRAR.

 
Zip file task
By default, SSIS doesn’t have a zip file task. There are some 3rd party components, like the one Joost van Rossum (Blog | @ssisjoost) built. You can find his custom task here. But what if you don’t want to use 3rd party components? You can create your own zip task, by using an “Execute Process Task”.

 
Variables and Parameters
Because I want to keep this as easy as possible, and I might want to reuse this later in another package, I started by creating 2 parameters and 1 local variable:

 
Variable
Function:
To generate a unique zip file name per day, I add the date to the zip file name

Name:
ZipName

Value:
"ZIPPED_" +
(DT_WSTR, 50) YEAR((DT_DBTIMESTAMP) @[System::StartTime]) + "_" +
RIGHT("00" + (DT_WSTR, 50) MONTH((DT_DBTIMESTAMP) @[System::StartTime]), 2) + "_" +
RIGHT("00" + (DT_WSTR, 50) DAY((DT_DBTIMESTAMP) @[System::StartTime]), 2) +
".zip"

 
This expression will evaluate to:
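For example, for a package that starts on 1 September 2014, the expression produces:

```
ZIPPED_2014_09_01.zip
```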

 
Parameter 1
Function:
Point the “Execute Process Task” to the folder of the files that need to be zipped

Name:
FilePath

Value:
C:\Temp

 
Parameter 2
Function:
Tell WinRAR which files to zip in which directory

Name:
FilesToZip

Value:
C:\Temp\*.txt

 
When you’ve added these, you should have this variable:

 
And these parameters:

 
Once that is done, you can add an “Execute Process Task” to the Control Flow. When you open the component you just added, go to the executable textbox, and point it to the WinRAR executable on your disk:

 
Now you need to add the arguments that tell WinRAR which files to zip, and where to put the archive. In the “Execute Process Task”, click on Expressions, and on the dotted button:

 
In the next window, select “Arguments” under Property, and press the dotted button again:

 
Now paste the string below in this window:

 
"a -df " + @[$Package::FilePath] + @[User::ZipName] + " " + @[$Package::FilesToZip]

 
If you press the evaluate button, you’ll see this result:

 

 
The command line switches we use are:

a: Archive files (or x for Extract files)
-df: Delete files after archiving
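Putting it all together with the sample values above, the command WinRAR ends up running looks something like this (the install path of WinRAR.exe is an assumption here, and note that FilePath needs a trailing backslash for the two path parts to concatenate correctly):

```
"C:\Program Files\WinRAR\WinRAR.exe" a -df C:\Temp\ZIPPED_2014_09_01.zip C:\Temp\*.txt
```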

 
So once you know how it’s done, this is pretty easy, and does what it needs to: zip your files without any issues. One thing to remember is that SSIS can’t show you the errors that WinRAR throws. When I first tested this, I didn’t notice that the destination directory didn’t exist. My SSIS package kept failing, and I didn’t know why. But after running the evaluated expression from the “Execute Process Task” in a command prompt, I did see the actual error.

So even though it’s easy to use, it’s not always easy to debug when it fails. But as long as you can execute it via command line yourself, you’re just a few steps away from seeing the actual error.

 
If you want to read more about SSIS, don’t forget to check out these blog posts:

- Julie Koesmarno: On sabbatical
- Mickey Stuewe: SQL Server data transferred to a sqlite database using ssis
- Chris Yates: ISPACs – Got To Love Em

T-SQL Tuesday #57 – SQL Family and community – RECAP

Last week I had the privilege to host this month’s T-SQL Tuesday blog party. The subject was SQL Family and community, and a lot of people participated. There were a lot of great blog posts, and I’ve made a complete list for you that contains all of this month’s posts:

 
Warwick Rudd (Blog | @Warwick_Rudd) posted: SQL Family and community

Boris Hristov (Blog | @BorisHristov) posted: SQL Family and SQLHangouts

Cathrine Wilhelmsen (Blog | @cathrinew) posted: #SQLFamily – Pay It Forward

Mickey Stuewe (Blog | @SQLMickey) posted: SQL Family to the Rescue of a Local Community

Chris Yates (Blog | @YatesSQL) posted: SQL Family and Community

Ed Watson (Blog | @SQLGator) posted: SQL Family and Tripping the Light Fantastique

Aaron Bertrand (Blog | @AaronBertrand) posted: A SQL _VARIANT Use Case (No Pun Intended)

Jason Brimhall (Blog | @sqlrnnr) posted: SQL Family and Community

Adam Mikolaj (Blog | @SqlSandwiches) posted: SQL Family and community

Glenda Gable (Blog | @ggable313) posted: TSQL Tuesdays # 57

Andy Yun (Blog | @SQLBek) posted: A #SQLFamily Story

Wayne Sheffield (Blog | @DBAWayne) posted: SQL Family and Community

Steve Jones (Blog | @way0utwest) posted: SQL Family and Community

Kenneth Fisher (Blog | @sqlstudent144) posted: SQL Family

Rob Farley (Blog | @rob_farley) posted: Nepotism In The SQL Family

Ricardo Leka (Blog | @BigLeka) posted: SQL Family and community

Jeffrey Verheul (Blog | @DevJef) posted: SQL Family and community

 
I want to thank all participants for their beautiful posts for T-SQL Tuesday. And thank you Adam, for allowing me to host this month!

If I missed your post, please let me know, and I’ll add it to this post.

T-SQL Tuesday #57 – SQL Family and community

T-SQL Tuesday is a recurring blog party, started by Adam Machanic (Blog | @AdamMachanic). Each month a blog will host the party, and everyone who wants to can write a blog post about a specific subject.

This month the subject is “SQLFamily and community”. If you want to read the opening post, please click the image below to go to the party-starter: Jeffrey Verheul (Blog | @DevJef).



 
This month’s topic is SQL Family and community. The reason I chose this topic is the great SQL Server community. As I mentioned in my opening post, I didn’t know what to expect from the community when I first started working with SQL Server. But it didn’t take long to notice why everyone was so enthusiastic about it. They say the SQL Server community is friendlier and more helpful than any other community. But is it?

It’s all about helping
A few years back, I attended a developer conference in the Netherlands called TechDays. One of the sessions I got to see was a session about spatial data by Bob Beauchemin (Blog | @bobbeauch). And yes, I think this is where the obsession with spatial data started. After the conference I started to play around with spatial data, and eventually I ran into a problem. At that moment I couldn’t find any useful resources online, and I knew Bob would know the answer.

After thinking about it for a day or 2, I finally decided to mail Bob. This was a big step for me, because I was about to email one of the people I highly respect and look up to. My expectations weren’t too high, because he couldn’t possibly have the time to answer my stupid email. But I took the bull by the horns anyway and pressed send. To my surprise, it didn’t even take an hour before I received an answer. From Bob. THE Bob. I couldn’t believe it! The big Bob Beauchemin took the time to not only read my email, but even provide an answer to my question. And of course the answer was spot on, and precisely what I needed to overcome my problem.

After that, I regularly emailed Bob with questions, interesting cases I came across, and many more things. I haven’t sent him an email in too long now, and I still hope I get to thank him for this in person one day. But this is where the community-vibe hit me.

 
From knowledge base to blog
When you get to work on all different kinds of projects, you end up with notepad files all over your desktop, USB hard drives, etc. At least, in my case that’s where it started. In order to create some structure in all these small solutions and notes, I decided to put them online. So basically I wanted to create a small knowledge base: one location where I could save scripts I had written before and might need again in the future. But after a few months, people started to thank me for my posts. I couldn’t understand why, because the posts were so basic, and sometimes total gibberish in my opinion.

But after a while, I started to like the “blogging” (which was just brain-dumping until then). The blog posts became more coherent, written for a broader audience, and easier for readers to follow and implement. So the brain dumps grew into something new: content that other people could actually use! One of the things I’m most proud of so far is the spatial data series I wrote (yes, spatial data again). It grew from just 1 post (the basics of spatial data) into a series of 11 posts, in which I guide readers from the basics to building their own reports. This series was inspired by questions from readers and colleagues, and questions I had myself.

 
Online community
One of the greatest discoveries for me was the community on Twitter. Anyone you can imagine from the community is on Twitter, and available within a few clicks. And they all take the time to answer questions! This came as a big surprise to me at first. There are so many interesting people you can follow, interact with, and who can help you with the technical challenges you have. In most cases, when you post a question with the hashtag #SQLHelp, you’ll get an answer within minutes. And your answer could come from anybody: an MCM from the UK, an MVP from Australia, a DBA from the US, a BI guy from South Africa, etc. All kinds of people with different skill sets and technical interests. And the variety of people makes it worthwhile to follow and interact with them.

 
When strangers become friends
After being “active” in the community for a while, I started talking to some people of the SQL Server community online. At one moment I asked one of them to review a blog post, to see what she thought. That was the start of a friendship that has brought me a lot over the last couple of years. She introduced me to some other SQL Server professionals, whom I now consider close friends. These friends support me, push me when I need it, and help me in any way possible. They inspire me to do better, and to take the extra step to achieve the goals I set myself. They are there when I need them, and that is the most important thing. And all of that because of SQL Family and the community! The community that I once couldn’t believe was more than a bunch of people with the same job and technical interests.

 
FORG
Being involved in the community is really cool, and there are benefits of that. For example, Red Gate gave me the opportunity to become a member of the Friends of Red Gate program, which was and is a great honor for me. I’ve loved using their tools for years, and now they gave me the opportunity to help make these tools better, by being part of this program. And hopefully there are a lot of cool things coming up, because I still have a lot of dreams and goals I set myself when I became a part of this program.

 
If you’re not involved yet, get involved!
The reason you should get involved in the community is that it’s really nice to help people, and you get so much in return. Getting help from someone is great, but helping other people makes you feel even better. At least, that’s my experience. So if you’re not already involved, start visiting local events like SQL Saturdays, or online events like virtual user groups. It’ll give you insight into other SQL Server subjects, and show you that other people struggle with the same problems you do!

T-SQL Tuesday #57 – SQL Family and community

T-SQL Tuesday is a recurring blog party, started by Adam Machanic (Blog | @AdamMachanic). Each month a blog will host the party, and everyone who wants to can write a blog post about a specific subject.

 
This month I’ll be the T-SQL Tuesday host, and I’m really honored! Thanks Adam, for this opportunity!

 
The topic
This month I would like to give everyone the opportunity to write about SQL Family. The first time I heard of SQL Family was on Twitter, where someone mentioned it. At first I didn’t know what to think about it. I wasn’t really active in the community, and I thought it was a little weird. They were just people you meet on the internet, and might meet in person at a conference some day. But I couldn’t be more wrong about that!

Once you start visiting events, forums, or get involved with the community in any other way, you’ll see I was totally wrong. I want to hear those stories. How do you feel about SQL Family? Did they help you, or did you help someone in the SQL Family? I would love to hear the stories of support, how it helped you grow and evolve, or how you would explain SQL Family to your friends and family (which I find hard). Just write about whatever topic you want, as long as it’s related to SQL Family or community.

 
The rules
– Your post must be published between 00:00:00 UTC and 23:59:59 UTC on Tuesday 12th of August
– Include the T-SQL Tuesday logo at the top of your post, and make sure your post links back to this one (trackbacks and comments are moderated, so it might take some time before they’re visible), or tweet about it using the hashtag #TSQL2sDay
– If you like this, check Steve Jones’ (Blog | @way0utwest) blog post that contains the list of topics, and contact Adam Machanic (Blog | @AdamMachanic) if you’d like to host

SQLBits: The new standard

Normally I don’t write blog posts about events I visit. But last week I attended my first SQLBits, and to be honest, I didn’t know what to expect. It’s just another conference, I thought, with a lot of interesting sessions, nothing fancy. Well, I couldn’t have been more wrong! SQLBits isn’t just a conference, it’s a cool event where you get to meet new people, see famous speakers talk and interact with them. And they organized an AWESOME party!

 
Location
SQLBits was held in Telford, UK, close to Birmingham. By train it took about an hour from the airport in Birmingham to Telford. The conference center was a beautiful location, with hotels close by, so you didn’t have to travel in the morning to get to the conference center. You just needed to cross the parking lot.

 
Pre-cons
On Thursday they organized pre-cons, which are full day instructor led training sessions. You need to pay for these pre-cons, but they’re definitely worth the money. You could choose a session from a list of 11, and I attended a pre-con led by Brent Ozar (Blog | @BrentO) about Virtualization, SANs, and Hardware for SQL Server.

Brent talked about how RPO and RTO are the starting points of any SQL Server architecture. He advises that the business fills out a form about RPO (Recovery Point Objective) and RTO (Recovery Time Objective), so they start thinking about what they’re asking of IT. After that, he discussed backup strategies, HA (High Availability) and DR (Disaster Recovery) designs, SANs, and SQL Server hardware. And even though I was familiar with some of the subjects he talked about, it was definitely good to look at certain things again from another perspective.

 
Sessions
Both Friday and Saturday were filled with good sessions. It started with a keynote from Nigel Ellis (Blog | @chillidemon) about Azure. It was a really interesting talk, and a good start to the day.

After that I saw some very interesting sessions about various subjects. The rooms were good, and all on 1 floor. So no huge groups trying to change floors, which I’ve seen at other conferences. This meant that rooms were easily accessible, and you could switch rooms between sessions very quickly.

 
The party
On Friday night there was a party, organized by the SQLBits team. And while I’m writing this, I’m still impressed when I look back at the party. They had it all sorted out: great food, great people, great theme! Just perfect! The theme of the party was steampunk. They arranged for a carousel, huge slide (the Helter Skelter), and various carnival booths with games and entertainment. You can find pictures of the party on Twitter.

Looking back at the party, it was the best post-conference party I’ve ever seen so far. EVER!

 
Food and drinks
Just a small thing, but this really shows that the SQLBits crew had it sorted out: the food could be found in several locations, and there were people carrying around trays with food. They served the food in small, square bowls, which were pretty handy to hold. Also, they served a number of different dishes, so you could pick whatever you liked.

The drinks were available throughout the conference center. So if you walked from one session to another, you could grab a coffee, tea, water or juice. It was really nice that they chose this setup, instead of 1 or 2 locations, which would’ve resulted in endless lines of people.

 
Feedback
If you attended SQLBits, one of the ways to let the organizers know what you think is by filling out the feedback forms. Even if you have negative feedback, please let them know. You can fill out the forms via the links below:

If you attended any of the days at SQLBits please can you all fill out the following survey:
http://www.sqlbits.com/SQLBitsXII

If you attended the Thursday Training Day then please fill out the following survey:
http://www.sqlbits.com/SQLBitsXIIThursday

If you attended the Friday Deep Dives Day then please fill out the following survey:
http://www.sqlbits.com/SQLBitsXIIFriday

If you attended the Saturday Community Day then please fill out the following survey:
http://www.sqlbits.com/SQLBitsXIISaturday

You’ll help the organizers with your feedback, so they can make the next SQLBits even better (if that’s possible), and you get a chance to win a £100 Amazon voucher!

 
Thanks!
The last thing I wanted to do is give a HUGE compliment to the organizers of SQLBits, the volunteers and all attendees that made this an awesome event! I’m already looking forward to next year!
