Restoring or Moving a database with CDC enabled

When you have CDC (Change Data Capture) enabled on your database, and you want to move it to another instance or restore it somewhere else, you need to take some extra steps to make sure CDC is transferred correctly. When I asked myself how this actually works, I decided to do some testing.

 
Create test resources
In order to test this, we need to create some test resources. I’ve created a new database, added a table and enabled CDC:

CREATE DATABASE [CDC_DB]
 ON PRIMARY
(NAME = N'CDC_DB', FILENAME = N'C:\Databases\#Backup\CDC_DB.mdf')
 LOG ON
(NAME = N'CDC_DB_log', FILENAME = N'C:\Databases\#Backup\CDC_DB_log.ldf')
GO

USE CDC_DB
GO

CREATE TABLE dbo.Customer
  (CustomerID INT CONSTRAINT PK_Customer PRIMARY KEY IDENTITY(1,1),
   FirstName VARCHAR(50),
   LastName VARCHAR(50))
GO

EXEC sys.sp_cdc_enable_db
GO

EXEC sys.sp_cdc_enable_table
  @source_schema = 'dbo',
  @source_name = 'Customer',
  @supports_net_changes = 1,
  @role_name = NULL,
  @index_name = 'PK_Customer'
GO


INSERT INTO dbo.Customer
  (FirstName, LastName)
VALUES
  ('John', 'Doe')
GO

Now that we have the resources, we can take a backup of the database:

BACKUP DATABASE CDC_DB TO DISK = 'C:\Databases\#Backup\CDC_DB.bak'

 
Restoring the backup
To show you that the default restore doesn't work, let's restore the backup next to the original database (the data and log files get their own names, because the original files are still in use by CDC_DB):

RESTORE DATABASE CDC_DB_RESTORE FROM DISK = 'C:\Databases\#Backup\CDC_DB.bak'
WITH
   MOVE 'CDC_DB' TO 'C:\Databases\#Backup\CDC_DB_RESTORE.mdf',
   MOVE 'CDC_DB_log' TO 'C:\Databases\#Backup\CDC_DB_RESTORE_log.ldf'

If we now try to select data from the change table of CDC, we get an error message:
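
The failing select can be as simple as the query below (a minimal sketch, again assuming the default capture instance name dbo_Customer); it errors because the change table was not restored along with the database:

USE CDC_DB_RESTORE
GO
SELECT * FROM cdc.dbo_Customer_CT
GO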

 
But CDC was in the old database, so why wasn’t it restored? If you look for a solution, Books Online / MSDN doesn’t say anything about this. But luckily other community members blogged about this before (see this and this article). There is an option you can add to your restore:

USE master
GO

DROP DATABASE CDC_DB_RESTORE
GO

RESTORE DATABASE CDC_DB_RESTORE FROM DISK = 'C:\Databases\#Backup\CDC_DB.bak'
WITH
   MOVE 'CDC_DB' TO 'C:\Databases\#Backup\CDC_DB_RESTORE.mdf',
   MOVE 'CDC_DB_log' TO 'C:\Databases\#Backup\CDC_DB_RESTORE_log.ldf',
   KEEP_CDC
GO

 
If you add "KEEP_CDC" to your restore statement, CDC will be restored along with the database. But that's not your only option.

 
Detach and Attach

Another option you have is to detach your database, copy the files to another location, and attach the database again. Let’s detach the database first:

USE master
GO
ALTER DATABASE [CDC_DB_RESTORE] SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO
EXEC master.dbo.sp_detach_db @dbname = N'CDC_DB_RESTORE'
GO

 
Now that’s done, we can copy the files to another directory if needed. You could also just attach it on the same instance:

USE master
GO
CREATE DATABASE [CDC_DB_RESTORE] ON 
( FILENAME = N'C:\Databases\#Backup\CDC_DB_RESTORE.mdf' ),
( FILENAME = N'C:\Databases\#Backup\CDC_DB_RESTORE_log.ldf' )
 FOR ATTACH
GO

 
This also allows you to use CDC like nothing happened. The database is still CDC enabled, the change tables are there, and the captured data is still in the change table. But there’s one thing missing when you move your CDC database to another instance: the capture and cleanup jobs.
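
If you want to double-check that the restored or re-attached database still has CDC enabled, a quick sanity check using the standard catalog views could look like this:

-- Is CDC enabled on the database?
SELECT name, is_cdc_enabled
FROM sys.databases
WHERE name = 'CDC_DB_RESTORE'
GO

-- Which tables are tracked by CDC?
USE CDC_DB_RESTORE
GO
SELECT name, is_tracked_by_cdc
FROM sys.tables
WHERE is_tracked_by_cdc = 1
GO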

 
CDC Jobs
So how do we recover the jobs? Script them from one instance, and create them on the other instance? There’s a better way to do that. You can just run the statement below to add the jobs to your instance:

USE CDC_DB_RESTORE
GO
EXEC sys.sp_cdc_add_job 'capture'
GO
EXEC sys.sp_cdc_add_job 'cleanup'
GO
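
To verify that both jobs were created, and to see how they are configured, you can use the CDC helper procedure below (just a quick check, not a required step):

USE CDC_DB_RESTORE
GO
EXEC sys.sp_cdc_help_jobs
GO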

 
And if you move your database, you can also clean up the old instance, and run the statements below to remove the leftover jobs:

USE CDC_DB_RESTORE
GO
EXEC sys.sp_cdc_drop_job @job_type = N'capture'
GO
EXEC sys.sp_cdc_drop_job @job_type = N'cleanup'
GO

 
It is possible, but…
Moving or restoring a database with CDC enabled gives you a few challenges, but it's certainly not impossible. Before you start, though, think about the approach you need to take in order to move or restore the database without losing CDC. Depending on your environment and possibilities, you might want to use a specific option, or perhaps you can't use the option you would like. There's usually a way around it, but I recommend you test it first, before you run this against production databases!

T-SQL Tuesday #51 – Place Your Bets

T-SQL Tuesday is a recurring blog party, started by Adam Machanic (Blog | @AdamMachanic). Each month a blog will host the party, and everyone who wants to can write a post about a specific subject.

This month the subject is “Place Your Bets”. If you want to read the opening post, please click the image below to go to the party-starter: Jason Brimhall (Blog | @sqlrnnr).



 
When I read about this month's T-SQL Tuesday topic, the first thing that came to mind was the things you know will go wrong sooner or later. When you encounter a situation like this, you immediately know it can't last forever. You want to fix it when you see it, but there's no money, or there's no time at that moment. But they promise you that in a few weeks you can take all the time you need. Well, that'll never happen. Until things go wrong, and you get to clean up the mess. Sounds familiar? Yes, we've all seen this, or will see it sooner or later.

 
With great power comes great responsibility
Just imagine this with me. One of your colleagues asks you to look at a problem he’s having with a script someone in your company wrote. You probably solved it while he was standing right next to you. He watches you solve the problem, and when it’s solved, he walks away with a thousand-yard stare in his eyes. You don’t really think about it when it happens, but it’ll come to you…

A few weeks later, it's 10 AM and you're still having your first coffee of the day, when the same developer asks you to look at "his script". Wait, what?! Yes, he watched you work your magic, and that funny language of "Es-Que-El" seemed easy to learn. So he bought himself a copy of "SQL Server for dummies", learned all he needed to know in only a weekend, and now wonders why it took you so long to learn it. From now on, he can write his own scripts, so he doesn't need you anymore. Except for this last time.

Opening the script scares you: it's a cursor. But in your frustration and amazement you "fix" the broken script by refactoring the select statement in the cursor. Because the cursor only collects data, you add a "TOP 10" clause to the select statement, and run the script as a test. Nice, it finishes in 25 seconds. "It will only consume 500 rows" is the last thing you hear him say. You send the guy off, so you can continue your own work.

Later in the day, around 4 PM, you meet the same guy at the coffee machine. He starts a discussion about how he needs a new PC, because the script YOU wrote is slow (see where this is going…?). It has been running for about 4 hours now, while it should only collect about 500 records. I know what you think: that's impossible. You walk with him to his desk, stop the script, and look at his code. That isn't the query you looked at this morning. Asking your colleague about it explains it all: he "slightly refactored" the script, because he didn't need all those weird statements to get his results. Well, after a fiery discussion of a few minutes, you explain that he DOES need the "FETCH NEXT" in the loop, because without it the cursor keeps executing the same statement against only the first record of the result set you declared for it.

So this funny "Es-Que-El" language isn't that easy to learn after all. There's a beautiful quote about that (I'm not sure who said it): "T-SQL is easy to learn, but hard to master". So putting your money on one horse, in this case buying yourself a single book, isn't a good idea.

 
Putting your money on one color
Another great example is a company that had a wonderful Business Intelligence environment. They used the whole nine yards: SQL Server, SSIS, SSAS, SSRS, etc. The downside of that you ask? It was all hosted on 1 physical machine, on a single SQL Server instance. Oh, and it was running low on disk space, and there was no room in the chassis to put in extra disks. That’s right: it was like juggling burning chainsaws with only one hand. Or an interesting challenge, if you will.

Eventually we hosted a few databases on NAS volumes. At that point, I was told the databases we moved were less important. Pro tip: never EVER trust them when they say that!!! They forgot to tell me that the biggest of the moved databases wasn't in the backup plan (a 500 GB database takes a long time to back up), and that the last backup was made over a year ago. Surprise: one night the network card failed for maybe only a microsecond, and SQL Server thought the LUN was offline or the disk had crashed. So SQL Server marked the database as corrupt, and reported the data files as unavailable. After a few hours, a reboot of the server fixed it, and SQL Server could see the disk volumes again. So the database was saved after all.

But you see where I’m going with this? You never know when things go wrong, and putting all your money on one color when playing roulette isn’t the best idea. If the hardware of your single server fails, you fail.

 
Next, Next, Finish?
But the biggest example I can give you of a badly placed bet is companies that work with SQL Server, but don't hire a DBA. Have you ever worked for a company that works with Oracle? Every single company that works with Oracle has a dedicated Oracle DBA. But have you ever wondered why that isn't the case when a company works with SQL Server?

Thinking about it, I guess this is because a successful SQL Server installation is only a few "Next, Next, Finish" mouse clicks away. So if the installation is that easy, surely every developer or person with IT experience can administer it. They couldn't be more wrong. You know that, I know that, every SQL Server professional knows that, but try to convince other people of that fact.

So the worst bet you can place, and this is how I write myself back to the subject of this month, is not hiring a professional to manage your data and data stores. You wouldn't let your local baker fix your car just because he wrote some books about cars, right? So why do you let a developer with basic knowledge near your SQL Server? Just because real DBAs cost money? Yes, we do cost some serious money. But in the end, at least when you hire a GOOD DBA, they will make you money. You don't think so? What does a DBA cost per hour? And how much money do you lose when your servers are down for just an hour?

Error deleting checkpoint file

One of the many tasks of being a DBA is to monitor the SQL Server. This doesn’t mean you only keep an eye on daily performance, but you need to monitor the whole process. This means you need to monitor jobs, ad-hoc queries, maintenance plans, etc. But what if you come across an error, and nothing fails…?

 
Finding the error
A few weeks ago I worked on a software release on my SQL Server environment, and the BI developers needed an extra copy of a database for their tests. We didn’t want to test the new functionality on the production database, but the test needed production data. So I decided to restore a backup of the database under a different name.

After a restore of the database, the developers started working on the database. After a few hours, I encountered an error myself (not related to the backup restore), and decided to check the SQL Server error log. But what I saw there was not the error I was looking for:

 

Error: 3634, Severity: 16, State: 2.
The operating system returned the error ‘3(failed to retrieve text for this error. Reason: 15105)’ while attempting ‘DeleteFile’ on ‘E:\Backup\RestoreCheckpointDB70.CKP’.

The actual error message can be found by opening a command prompt and running "net helpmsg 3" (where 3 is the error number from the message above). The actual error is:

 

The system cannot find the path specified.

This error occurred at the same time my restore finished. Strange… And looking at the location in the error, there was no folder called “Backup”. But then I remembered something…

The week before, something had gone wrong with the backup process. We currently use the default maintenance plans in SQL Server, and the job had failed. Because of a lack of disk space on the machine, I cleaned up the drive where the backups were dropped. But strangely enough the drive contained 3 backup folders, of which 2 were completely empty. So without properly checking, I deleted 2 of the 3 folders. The only folder left was the one that was referenced in the maintenance job.

 
Finding the source of the issue
But now, how to solve this issue? Apparently my cleanup didn't help SQL Server, it actually harmed it…

During my search, I found several helpful websites, that led me to this registry key:

 

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10_50.[Instance Name]\MSSQLServer

In this registry key, you’ll find the default backup location that SQL Server uses to write (for example) your checkpoint file to. And guess what… You’re right! This was one of the empty folders I deleted.
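
If you'd rather not open the registry editor at all, you can also read this value from T-SQL. A minimal sketch, using the well-known (but undocumented) xp_instance_regread procedure:

-- Read the default backup directory for this instance from the registry
EXEC master.dbo.xp_instance_regread
	N'HKEY_LOCAL_MACHINE',
	N'Software\Microsoft\MSSQLServer\MSSQLServer',
	N'BackupDirectory'
GO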

 
Resolving the issue
Once I knew what caused the issue, I created a new folder with the same name in the same location, and added the security permissions needed. But what if you don't want to do that, or re-creating the directory isn't possible?

There are several other ways to solve an issue like this. For example (for you gutsy ones!) you could edit the registry. No, seriously, please don’t! :)

One of the ways to change this is by right-clicking your instance name in SSMS (SQL Server Management Studio), opening the server properties, and choosing "Database Settings":

Another way to change the default locations is to right-click your instance name in SSMS, and choose "Facets":

Conclusion
One of the things I've learned in the past, and that I was reminded of here, is:

When you’re dealing with SQL Server, always check, double check, and triple check the changes you make!

The same goes for the settings I've shown you above! Always check what you change, because some changes only take effect after the next service restart or machine reboot. So if you change something and you don't see the effects immediately, that doesn't mean it was a safe change!

Moving system database: rocket science or piece of cake?

Being a DBA often makes you the “Default Blame Acceptor”, according to Buck Woody (Website | @buckwoody). This means that everything is your fault by default. Server broke down? Your fault! Database corrupt? Your fault! Query of a user doesn’t compile because of a syntax error? Yeah, you guessed right… Your fault!

But on the other hand, you get a lot of opportunities to find out the best way of doing things. An example of that is moving a system database. About two weeks ago we decided to order 4 SSDs for our SQL Server. We plan to store tempdb and the SSAS data on these disks, hoping that it will reduce the resource pressure on our environment.

So with no experience of moving system databases, I started thinking about how to do this. You probably need to stop the SQL Server, move the MDF and LDF files, change the start-up options of SQL Server, start the service, hope that SQL Server finds the new location, etc. But after a quick peek I found a much simpler solution: just modify the current file location!

 
Check the current location and file sizes
Before moving your database (in this case I’m moving my tempdb), run the query below, and store the result just in case all goes south:

SELECT name, physical_name, state_desc, (size * 8 / 1024.00) AS InitialSize
FROM sys.master_files
WHERE database_id = DB_ID('tempdb');

 
The reason you also want to store the initial sizes is that when you restart the SQL Server service (one of the next steps), the tempdb files are recreated at their configured initial sizes, not at the size they may have grown to. And you don't want to run on unexpected file sizes, of course!

 
Set the new file location
You can set the new file location for your tempdb, by running the query below. In this example I’m moving my datafiles to the D:\ volume of my machine:

USE master
GO

ALTER DATABASE tempdb 
	MODIFY FILE (NAME = tempdev, FILENAME = 'D:\MSSQL\DATA\tempdb.mdf')
GO
ALTER DATABASE tempdb 
	MODIFY FILE (NAME = templog, FILENAME = 'D:\LOG\templog.ldf')
GO

 
After executing this statement, you’ll see a message like this appear in the Messages window:

 

The file “tempdev” has been modified in the system catalog. The new path will be used the next time the database is started.
The file “templog” has been modified in the system catalog. The new path will be used the next time the database is started.

 
So the file location is altered, but the running values are not changed until your machine is rebooted, or the SQL Service is restarted.

Now just restart the SQL Service (or the machine if you like to), and run the first query again. This way you can check if your tempdb is stored in the right folder, and if the initial sizes are correct:

SELECT name, physical_name, state_desc, (size * 8 / 1024.00) AS InitialSize
FROM sys.master_files
WHERE database_id = DB_ID('tempdb');

 
The service should stop and start without issues. After the restart you’ll see that SQL Server created a new MDF and LDF file at the new file location. After a successful restart, you can delete the MDF and LDF files from the old location.
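
If the check shows the files came back smaller than you want, you can set the initial sizes again with MODIFY FILE as well. A minimal sketch (the sizes below are just example values, use your own):

USE master
GO

ALTER DATABASE tempdb
	MODIFY FILE (NAME = tempdev, SIZE = 1024MB)
GO
ALTER DATABASE tempdb
	MODIFY FILE (NAME = templog, SIZE = 512MB)
GO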

 
Now, was that so hard?
So as you can see, not all changes in SQL Server are rocket science. One thing I've learned is that, from now on, I'm not going to assume the worst and hope for the best!

 
UPDATE
As Pieter Vanhove (Blog | @Pieter_Vanhove) mentions in his tweets about msdb, model and master, in some cases you need to do a little bit more work. Because tempdb is the database least likely to break SQL Server (it's recreated every time the SQL Server service starts), you can move it just by changing the settings.

On the restart, the “Running values” (current settings) are overwritten by the “Configured values” (new settings) you set with the query you ran, and tempdb is recreated.

But the other system databases require a little bit more effort. If you want to move master, model or msdb, check out this link.
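
In short: for model and msdb the same MODIFY FILE approach applies, but because those databases are not recreated at start-up, you also have to stop the service, physically move the files, and start the service again. Master is different: its file locations are set through the -d and -l start-up parameters in SQL Server Configuration Manager. A minimal sketch for msdb (the target paths are just an example):

USE master
GO

ALTER DATABASE msdb
	MODIFY FILE (NAME = MSDBData, FILENAME = 'D:\MSSQL\DATA\MSDBData.mdf')
GO
ALTER DATABASE msdb
	MODIFY FILE (NAME = MSDBLog, FILENAME = 'D:\MSSQL\DATA\MSDBLog.ldf')
GO

-- Then stop the SQL Server service, move MSDBData.mdf and MSDBLog.ldf to D:\MSSQL\DATA, and start the service again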

And thank you Pieter, for pointing out this stupid mishap to me!

T-SQL Tuesday #39 – Can you shell what the PoSH is Cooking?

T-SQL Tuesday is a recurring blog party, started by Adam Machanic (Blog | @AdamMachanic). Each month a blog will host the party, and everyone who wants to can write a post about a specific subject.

This month the subject is “Can you shell what the PoSH is Cooking?”. If you want to read the opening post, please click the image below to go to the party-starter: Wayne Sheffield (Blog | @DBAWayne).



A few months ago I attended a session by Jeff Wouters (Blog | @JeffWouters) about PowerShell. He talked about how powerful PowerShell really is, and showed us some examples. This was my first encounter with the "commandline on steroids". Back then I didn't think I'd use it any time soon, but that is changing fast! The wide variety of possibilities that PowerShell offers made me curious, but I never found the time or a use for it. Until last week…

A great number of things have changed over the last few weeks, and that gave me the opportunity to play around with PowerShell. Starting with reading a tutorial and creating my first few trial scripts, it soon hit me that it isn't that easy. But on the other hand, if you succeed at writing a script, it's very powerful!

PowerShell ISE
One of the very first things I tried after starting the PowerShell ISE (the "studio" in which you can write your scripts) was to find all objects in my database. The first thing you need to do is:

#Set the Execution-Policy to Unrestricted
Set-ExecutionPolicy Unrestricted

#Run the Import-Module below once, so you can use the SQLSERVER functionality
Import-Module SQLPS

This will allow you to access your SQL Server objects. After that, you can start with the easy stuff like:

dir SQLSERVER:\sql\localhost\SQL2012\databases\Sandbox\Tables

dir SQLSERVER:\sql\localhost\SQL2012\databases\Sandbox\Views

dir SQLSERVER:\sql\localhost\SQL2012\databases\Sandbox\StoredProcedures

dir SQLSERVER:\sql\localhost\SQL2012\databases\Sandbox\Users

This is very basic, and really easy to figure out. From PowerShell, the results will look like this:

And from the “PowerShell ISE”, it will look like this:

Another thing I tried is to automatically create a database on my local server:

[void] [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.ConnectionInfo") 
[void] [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") 

$serverInstance = "(local)\SQL2012"
$conn = New-Object -TypeName Microsoft.SqlServer.Management.Common.ServerConnection -ArgumentList $serverInstance 
$server = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server -ArgumentList $conn

$newdb = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Database($server, "PowerShellToTheRescue")
$newdb.Create()

This script will create a new database called “PowerShellToTheRescue” on your local SQL Server. I assume it’s also possible to create tables in the database, but I haven’t found the time to test that.

PowerShell through SQL Server Management Studio (SSMS)
Another option is to start PowerShell via a menu in SSMS. This means PowerShell starts with the current database as its default context. So you don't have to use "Import-Module", and you can start running your queries right away. You can find this option by right-clicking your database and choosing "Start PowerShell":

So, what are you saying?!
This month's T-SQL Tuesday post isn't that impressive, I know! But what I'm trying to say is that if you try PowerShell, you might find that it's quite powerful and fun.

And thinking about it, I guess you can expect some more PowerShell posts from me in the next few weeks! ;)

Strange behavior of spatial data

As of today, I’m kind of forced to admit I have a problem… I’m in love with spatial data. And once you’re hooked, there’s no turning back. It could be worse of course! And in these circumstances, you come across the most interesting cases…

After trying to draw a geometry polygon in SQL Server 2008, I wondered what the difference is between a polygon and a multipolygon. Polygon also accepts more than one polygon, just like a multipolygon. So what's the difference then? I think I found the difference, with the help of Andreas Wolter (Blog | @AndreasWolter).

Just look at the query below:

DECLARE @Obj GEOMETRY
SET @Obj = GEOMETRY::STGeomFromText('POLYGON((10 0, 10 10, 0 10, 0 0, 10 0),
											 (10 15, 10 25, 0 25, 0 15, 10 15, 10 15))'
									,4326)

SELECT @Obj.ToString(), @Obj

This query runs and draws both shapes, even though they are two separate objects declared in a single POLYGON. A polygon should consist of one exterior ring (any additional rings are supposed to be interior rings, i.e. holes), not a second, separate shape like in the example above.

This isn’t the weirdest I’ve seen. Just look at the example below:

DECLARE @Obj GEOMETRY
SET @Obj = GEOMETRY::STGeomFromText('POLYGON((10 0, 10 10, 0 10, 0 0, 10 0),
											 (10 15, 10 25, 0 25, 0 15, 10 15, 10 15))'
									,4326)

--==================================================

SELECT
	'Geo Object'					AS Description,
	@Obj							AS GeoObject,
	@Obj.ToString()					AS GeoObject_ToString,
	@Obj.STIsValid()				AS GeoObject_IsValid

UNION ALL

SELECT
	'Geo Object + MakeValid()'		AS Description,
	@Obj.MakeValid()				AS GeoObject,
	@Obj.MakeValid().ToString()		AS GeoObject_ToString,
	@Obj.MakeValid().STIsValid()	AS GeoObject_IsValid

If you run the example, you'll see a description of the objects, the spatial object itself, the object's ToString(), and a bit (boolean) from STIsValid(). The first record in the result set is just the same object as in the previous example. The second row shows the same object after calling .MakeValid().

As you can see, the first record (2 objects in 1 polygon) is not a valid geometry object. The second record shows that .MakeValid() converts your polygon into a multipolygon. And if you check whether the multipolygon is valid, it returns true.
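
You can also confirm the conversion directly, without reading the WKT output by hand. A small sketch of that check, using the same polygon:

DECLARE @Obj GEOMETRY
SET @Obj = GEOMETRY::STGeomFromText('POLYGON((10 0, 10 10, 0 10, 0 0, 10 0),
											 (10 15, 10 25, 0 25, 0 15, 10 15, 10 15))'
									,4326)

SELECT
	@Obj.STGeometryType()				AS OriginalType,		-- Polygon
	@Obj.MakeValid().STGeometryType()	AS TypeAfterMakeValid,	-- MultiPolygon
	@Obj.MakeValid().STNumGeometries()	AS NumberOfGeometries	-- 2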

This example was already getting weird, but now run it on a SQL Server 2012 instance. You will see a difference in the multipolygon coordinates. I've run the example on both versions, and you can see the differences in the screenshots below:

SQL Server 2008:

SQL Server 2012:

The conversion to a multipolygon causes a different rounding of the coordinates. It seems the conversion logic from SQL Server 2008 was changed quite a bit in SQL Server 2012.

So what does this mean for the usage of geometry and geography data? I don't know… It might mean we always need to use .MakeValid() before storing a polygon. Part of the issue is that spatial data is still pretty new, which means there are no real best practices yet. So time will tell what those best practices will become…

Using SQLCMD to your advantage

A few weeks ago, I came across something called SQLCMD (or SQL Command). I'd never heard of it, so the curious developer in me wanted to know what it was. The official definition according to MSDN is this:

The sqlcmd utility lets you enter Transact-SQL statements, system procedures, and script files at the command prompt, in Query Editor in SQLCMD mode, in a Windows script file or in an operating system (Cmd.exe) job step of a SQL Server Agent job. This utility uses ODBC to execute Transact-SQL batches.

You can either run a SQLCMD query via command-line or in SQL Server Management Studio (SSMS). If you want to run it via SSMS, open a new query and press the hotkey combination: ALT+Q, ALT+M. You could also use the menu Query -> SQLCMD Mode.

Once you’ve done that, you can run queries in SQLCMD mode. This means you can use your query window as a commandline tool. The advantage of this, is that you can query multiple servers and instances in one query.

To run queries in SQLCMD mode you need the commands listed below:

:!! <command>
    Directly executes a cmd command from SQLCMD
:CONNECT <server>[\instance] [-l timeout] [-U user] [-P password]
    Connects to the default or named instance of the specified server
:ERROR <destination>
    Redirects error output to a file, stderr or stdout
:EXIT [(<query>)]
    Quits SQLCMD immediately; with a query, it executes the query and returns a numeric result
GO [n]
    Executes the cached statements (optionally n times)
:ON ERROR <exit | ignore>
    Specifies which action to take if a statement encounters an error
:OUT <filename | stderr | stdout>
    Redirects query output to a file, stderr or stdout
:PERFTRACE <filename | stderr | stdout>
    Redirects timing output to a file, stderr or stdout
:QUIT
    Quits SQLCMD immediately
:R <filename>
    Appends a file to the statement cache (ready to execute)
:RESET
    Discards the statement cache (resets the session)
:SERVERLIST
    Lists local and network SQL Server instances
:SETVAR <varname> <"value">
    Sets a SQLCMD scripting variable
Remember, not all commands are listed above, just the ones I found useful at this time. Parameters listed in < > are mandatory, and parameters in [ ] are optional.

To write all queries out would be too much, so I created some scripts for you to download. Here’s a list of files you can download and try on your local SQL server:

Get database info with screen output, from multiple SQL servers
Execute script file, and output to file
Get directory content via commandline
Collection of commands in 1 script file
Script file used in the example above

The first script file is the most interesting, if you ask me. In this script, I connect to multiple instances of SQL Server to retrieve data. You can use this instead of the multiserver query I blogged about earlier. Instead of registering a group of servers and running your query against that, you can now specify the servers you want by server name or IP address.
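
To give you an idea of the pattern (a minimal sketch, the server and instance names are placeholders), such a script in SQLCMD mode looks something like this:

:CONNECT SERVER1\INSTANCE1
SELECT @@SERVERNAME AS ServerName, name AS DatabaseName FROM sys.databases
GO

:CONNECT SERVER2\INSTANCE2
SELECT @@SERVERNAME AS ServerName, name AS DatabaseName FROM sys.databases
GO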

With this information you could start writing your own SQLCMD queries. If you want, you can do this via SSMS or command line. The command line utility can be found at this location:

C:\Program Files\Microsoft SQL Server\100\Tools\Binn\SQLCMD.exe
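
Calling the utility from the command line works along these lines (a sketch with placeholder names: -S is the server, -E uses Windows authentication, -i points to the input script and -o to the output file):

SQLCMD -S SERVER1\INSTANCE1 -E -i "C:\Scripts\MyScript.sql" -o "C:\Scripts\MyScript_Output.txt"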

If you have any questions or need some help, just contact me. I’d love to help you out! :)
