T-SQL Tuesday #51 – Place Your Bets

T-SQL Tuesday is a recurring blog party that was started by Adam Machanic (Blog | @AdamMachanic). Each month a blog will host the party, and everyone who wants to can write a post about a specific subject.

This month the subject is “Place Your Bets”. If you want to read the opening post, please visit the party-starter: Jason Brimhall (Blog | @sqlrnnr).

When I read about this month's T-SQL Tuesday topic, the first thing that came to mind was things that you know will go wrong sooner or later. When you encounter a situation like this, you immediately know it can't last forever. You want to fix it when you see it, but there's no money, or there's no time at that moment. But they promise you that in a few weeks you can take all the time you need. Well, that'll never happen. Until things go wrong, and you get to clean up the mess. Sound familiar? Yes, we've all seen this, or will see it sooner or later.

 
With great power comes great responsibility
Just imagine this with me. One of your colleagues asks you to look at a problem he's having with a script someone in your company wrote. You probably solve it while he's standing right next to you. He watches you solve the problem, and when it's solved, he walks away with a thousand-yard stare in his eyes. You don't really think about it when it happens, but it'll come back to you…

A few weeks later, at 10 AM while you're still having your first coffee of the day, the same developer asks you to look at “his script”. Wait, what?! Yes, he watched you work your magic, and that funny language of “Es-Que-El” seemed easy to learn. So he bought himself a copy of “SQL Server for Dummies”, learned all he needed to know in just one weekend, and wonders why it took you so long to learn it. From now on he can write his own scripts, so he doesn't need you anymore. Except for this one last time.

Opening the script scares you: it's a cursor. But in your frustration and amazement you “fix” the broken script by refactoring the select statement in his cursor. Because the cursor only collects data, you add a “TOP 10” clause to the select statement, and run the script as a test. Nice, it finishes in 25 seconds. “It will only consume 500 rows” is the last thing you hear him say. You send the guy off, so you can continue your own work.

Later in the day, around 4 PM, you meet the same guy at the coffee machine. He starts a discussion about how he needs a new PC, because the script YOU wrote is slow (see where this is going…?). It has been running for about 4 hours now, while it should only collect about 500 records. I know what you think: that's impossible. You walk with him to his desk, stop the script, and look at his code. That isn't the query you looked at this morning. Asking your colleague about it explains it all: he “slightly refactored” the script, because he didn't need all those weird statements to get his results. Well, after a fiery discussion of a few minutes, you explain to him that he DOES need the “FETCH NEXT” in the query, because without it the cursor keeps processing the very first record of the select statement you declared for the cursor, over and over again.
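To illustrate, here's a minimal sketch of a correctly written cursor loop (the table and column names are made up for the example):

DECLARE @CustomerName VARCHAR(100)

DECLARE CustomerCursor CURSOR FOR
	SELECT TOP 500 Name
	FROM dbo.Customers	-- Hypothetical table

OPEN CustomerCursor
FETCH NEXT FROM CustomerCursor INTO @CustomerName

WHILE @@FETCH_STATUS = 0
BEGIN
	PRINT @CustomerName

	-- Without this FETCH NEXT, @@FETCH_STATUS never changes,
	-- and the loop processes the very first record forever
	FETCH NEXT FROM CustomerCursor INTO @CustomerName
END

CLOSE CustomerCursor
DEALLOCATE CustomerCursor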

So this funny “Es-Que-El” language isn't that easy to learn after all. There's a beautiful quote about that (I'm not sure who said it first): “T-SQL is easy to learn, but hard to master”. So putting your money on one horse, in this case buying yourself a book, isn't a good idea.

 
Putting your money on one color
Another great example is a company that had a wonderful Business Intelligence environment. They used the whole nine yards: SQL Server, SSIS, SSAS, SSRS, etc. The downside of that, you ask? It was all hosted on 1 physical machine, on a single SQL Server instance. Oh, and it was running low on disk space, and there was no room in the chassis to put in extra disks. That's right: it was like juggling burning chainsaws with only one hand. Or an interesting challenge, if you will.

Eventually we hosted a few databases on NAS volumes. At that point, I was told that the databases we moved were less important. Pro tip: never EVER trust them when they say that!!! They forgot to tell me that the biggest of the moved databases wasn't in the backup plan (a 500 GB database takes a long time to back up), and that the last backup was made over a year ago. Surprise: one night the network card failed for maybe only a microsecond, and SQL Server thought the LUN was offline or the disk had crashed. So SQL Server decided the database was corrupt, and the data files were unavailable. After a few hours, a reboot of the server fixed it, and SQL Server could see the disk volumes again. So the database was saved after all.
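By the way, if you want to avoid the same surprise, a quick check against the backup history in msdb (a minimal sketch) tells you when each database was last backed up. A NULL means there's no backup recorded at all:

SELECT
	d.name AS DatabaseName,
	MAX(b.backup_finish_date) AS LastBackupFinished
FROM master.sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b ON b.database_name = d.name
GROUP BY d.name
ORDER BY LastBackupFinished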

But you see where I’m going with this? You never know when things go wrong, and putting all your money on one color when playing roulette isn’t the best idea. If the hardware of your single server fails, you fail.

 
Next, Next, Finish?
But the biggest example I can give you of a badly placed bet is companies that work with SQL Server, but don't hire a DBA. Have you ever worked for a company that works with Oracle? Every single company that works with Oracle has a dedicated Oracle DBA. But have you ever wondered why that isn't the case when a company works with SQL Server?

Thinking about it, I guess this is because a successful SQL Server installation is only a few “Next, Next, Finish” mouse clicks away. So if the installation is that easy, every developer or person with IT experience can probably administer it. They couldn't be more wrong. You know that, I know that, every SQL Server professional knows that, but try to convince other people of that fact.

So the worst bet you can place, and this is how I write myself back to the subject of this month, is not hiring a professional to manage your data and data stores. You wouldn't let your local baker fix your car just because he wrote some books about cars, right? So why do you let a developer with basic knowledge near your SQL Server? Just because real DBAs cost money? Yes, we do cost some serious money. But in the end, at least when you hire a GOOD DBA, they will make you money. You don't think so? What does a DBA cost per hour? And how much money do you lose when your servers are down for just an hour?

Error deleting checkpoint file

One of the many tasks of being a DBA is to monitor the SQL Server. This doesn't mean you only keep an eye on daily performance; you need to monitor the whole process. This means you need to monitor jobs, ad-hoc queries, maintenance plans, etc. But what if you come across an error, and nothing fails…?

 
Finding the error
A few weeks ago I worked on a software release on my SQL Server environment, and the BI developers needed an extra copy of a database for their tests. We didn’t want to test the new functionality on the production database, but the test needed production data. So I decided to restore a backup of the database under a different name.

After a restore of the database, the developers started working on the database. After a few hours, I encountered an error myself (not related to the backup restore), and decided to check the SQL Server error log. But what I saw there was not the error I was looking for:

 

Error: 3634, Severity: 16, State: 2.
The operating system returned the error ‘3(failed to retrieve text for this error. Reason: 15105)’ while attempting ‘DeleteFile’ on ‘E:\Backup\RestoreCheckpointDB70.CKP’.

The actual error message can be found by opening a command prompt and running “net helpmsg 3” (where 3 is the error number from the message above). The actual error is:

 

The system cannot find the path specified.

This error occurred at the same time my restore finished. Strange… And looking at the location in the error, there was no folder called “Backup”. But then I remembered something…

The week before, something went wrong with the backup process. We currently use the default maintenance plans in SQL Server, and something went wrong with the job. Because of a lack of disk space on the machine, I cleaned up the drive where the backups were dropped. But strangely enough the drive contained 3 backup folders, of which 2 were completely empty. So without properly checking, I deleted 2 of the 3 folders. The only folder left was the one that was referenced in the maintenance job.

 
Finding the source of the issue
But now, how to solve this issue? Apparently my cleanup didn't help SQL Server; it actually harmed it…

During my search, I found several helpful websites that led me to this registry key:

 

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10_50.[Instance Name]\MSSQLServer

In this registry key, you’ll find the default backup location that SQL Server uses to write (for example) your checkpoint file to. And guess what… You’re right! This was one of the empty folders I deleted.
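By the way, you can also read this value straight from a query window, using the undocumented extended stored procedure xp_instance_regread (undocumented, so treat it as a convenience rather than an official API). It translates the instance-neutral path to the instance-specific key for you:

EXEC master.dbo.xp_instance_regread
	N'HKEY_LOCAL_MACHINE',
	N'Software\Microsoft\MSSQLServer\MSSQLServer',
	N'BackupDirectory'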

 
Resolving the issue
Once I knew what caused the issue, I created a new folder with the same name in the same location, and added the security permissions needed. But what if you don't want to do that, or restoring the directory isn't possible?

There are several other ways to solve an issue like this. For example (for you gutsy ones!) you could edit the registry. No, seriously, please don’t! :)

One of the ways to change this is by right-clicking your instance name in SSMS, opening the server properties, and choosing “Database Settings”:

Another way to change the default locations is to right-click your instance name in SSMS, and choose “Facets”:

Conclusion
One of the things I've learned in the past, and that I was reminded of here, is:

When you’re dealing with SQL Server, always check, double check, and triple check the changes you make!

So the same goes for the settings I've shown you above! Always check what you change, because some changes only take effect after the next service restart or machine reboot. So if you change something, and you don't see the effects immediately, that doesn't mean it was a safe change!

Moving system database: rocket science or piece of cake?

Being a DBA often makes you the “Default Blame Acceptor”, according to Buck Woody (Website | @buckwoody). This means that everything is your fault by default. Server broke down? Your fault! Database corrupt? Your fault! Query of a user doesn’t compile because of a syntax error? Yeah, you guessed right… Your fault!

But on the other hand, you get a lot of opportunities to find out the best practices for doing things. An example of that is moving a system database. About two weeks ago we decided to order 4 SSDs for our SQL Server. We plan to store tempdb and the SSAS data on these disks, hoping that it will reduce the resource load on our environment.

So with no experience of moving system databases, I started thinking about how to do this. You'd probably need to stop SQL Server, move the MDF and LDF files, change the start-up options of SQL Server, start the service, hope that SQL Server finds the new location, etc. But after a quick peek I found a much simpler solution: just modify the current file location!

 
Check the current location and file sizes
Before moving your database (in this case I’m moving my tempdb), run the query below, and store the result just in case all goes south:

SELECT name, physical_name, state_desc, (size * 8 / 1024.00) AS InitialSize
FROM sys.master_files
WHERE database_id = DB_ID('tempdb');

 
The reason you also want to store the initial sizes is that when you restart the SQL Server service (one of the next steps), SQL Server will set the files to the default file sizes. And you don't want to run on those default settings, of course!

 
Set the new file location
You can set the new file location for your tempdb, by running the query below. In this example I’m moving my datafiles to the D:\ volume of my machine:

USE master
GO

ALTER DATABASE tempdb 
	MODIFY FILE (NAME = tempdev, FILENAME = 'D:\MSSQL\DATA\tempdb.mdf')
GO
ALTER DATABASE tempdb 
	MODIFY FILE (NAME = templog, FILENAME = 'D:\LOG\templog.ldf')
GO

 
After executing this statement, you’ll see a message like this appear in the Messages window:

 

The file “tempdev” has been modified in the system catalog. The new path will be used the next time the database is started.
The file “templog” has been modified in the system catalog. The new path will be used the next time the database is started.

 
So the file location is altered, but the running values are not changed until your machine is rebooted, or the SQL Service is restarted.

Now just restart the SQL Server service (or the machine if you like), and run the first query again. This way you can check if your tempdb is stored in the right folder, and if the initial sizes are correct:

SELECT name, physical_name, state_desc, (size * 8 / 1024.00) AS InitialSize
FROM sys.master_files
WHERE database_id = DB_ID('tempdb');

 
The service should stop and start without issues. After the restart you'll see that SQL Server created new MDF and LDF files at the new location. After a successful restart, you can delete the MDF and LDF files from the old location.

 
Now, was that so hard?
So as you can see, not all changes in SQL Server are rocket science. One thing I've learned is that from now on, I'm not going to assume the worst and just hope for the best!

 
UPDATE
As Pieter Vanhove (Blog | @Pieter_Vanhove) mentions in his tweets about msdb/model and master, in some cases you need to do a little bit more work. Because tempdb is the system database that is least likely to break SQL Server (it's recreated every time the SQL Server service starts), you can move it just by changing the settings.

On the restart, the “Running values” (current settings) are overwritten by the “Configured values” (new settings) you set with the query you ran, and tempdb is recreated.

But the other system databases require a little bit more effort. If you want to move master, model or msdb, check out this link.
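In short: for master, the file locations are not picked up from the system catalog but from the startup parameters of the SQL Server service (configurable via SQL Server Configuration Manager), so those need to be changed as well. The parameters look something like this (the paths are just examples):

-dD:\MSSQL\DATA\master.mdf	(location of the master data file)
-eD:\MSSQL\LOG\ERRORLOG	(location of the error log)
-lD:\MSSQL\LOG\mastlog.ldf	(location of the master log file)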

And thank you Pieter, for pointing out this stupid mishap to me!

T-SQL Tuesday #39 – Can you shell what the PoSH is Cooking?

T-SQL Tuesday is a recurring blog party that was started by Adam Machanic (Blog | @AdamMachanic). Each month a blog will host the party, and everyone who wants to can write a post about a specific subject.

This month the subject is “Can you shell what the PoSH is Cooking?”. If you want to read the opening post, please visit the party-starter: Wayne Sheffield (Blog | @DBAWayne).



A few months ago I attended a session by Jeff Wouters (Blog | @JeffWouters) about PowerShell. He talked about how powerful PowerShell really is, and showed us some examples. This was my first encounter with the “command line on steroids”. Back then I didn't think I'd use it any time soon, but that is changing fast! The wide variety of possibilities that PowerShell offers made me curious, but I never found the time and a use for it. Until last week…

A great number of things have changed in the last few weeks, and that gave me the opportunity to play around with PowerShell. Starting with reading a tutorial and creating my first few trial scripts, it soon hit me that it isn't that easy. But on the other hand, if you succeed at writing a script, it's very powerful!

PowerShell ISE
One of the very first things I tried after starting the PowerShell ISE (the “studio” in which you can write your scripts) was to find all objects in my database. The first thing you need to do is:

#Set the Execution-Policy to Unrestricted
Set-ExecutionPolicy Unrestricted

#Run the Import-Module below once, so you can use the SQLSERVER functionality
Import-Module SQLPS

This will allow you to access your SQL Server objects. After that, you can start with the easy stuff like:

dir SQLSERVER:\sql\localhost\SQL2012\databases\Sandbox\Tables

dir SQLSERVER:\sql\localhost\SQL2012\databases\Sandbox\Views

dir SQLSERVER:\sql\localhost\SQL2012\databases\Sandbox\StoredProcedures

dir SQLSERVER:\sql\localhost\SQL2012\databases\Sandbox\Users

This is very basic, and really easy to figure out. From PowerShell, the results will look like this:

And from the “PowerShell ISE”, it will look like this:

Another thing I tried is to automatically create a database on my local server:

[void] [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.ConnectionInfo") 
[void] [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") 

$serverInstance = "(local)\SQL2012"
$conn = New-Object -TypeName Microsoft.SqlServer.Management.Common.ServerConnection -ArgumentList $serverInstance 
$server = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server -ArgumentList $conn

$newdb = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Database($server, "PowerShellToTheRescue")
$newdb.Create()

This script will create a new database called “PowerShellToTheRescue” on your local SQL Server. I assume it’s also possible to create tables in the database, but I haven’t found the time to test that.

PowerShell through SQL Server Management Studio (SSMS)
Another option is to start PowerShell via a menu in SSMS. This means PowerShell starts with the current database as its default context. So then you don't have to use “Import-Module”, and you can start running your query right away. You can find this option by right-clicking your database, and choosing “Start PowerShell”:

So, what are you saying?!
This month's T-SQL Tuesday post isn't that impressive, I know! But what I'm trying to say is that if you try PowerShell, you might find that it's quite powerful and fun.

And thinking about it, I guess you can expect some more PowerShell posts from me in the next few weeks! ;)

Strange behavior of spatial data

As of today, I’m kind of forced to admit I have a problem… I’m in love with spatial data. And once you’re hooked, there’s no turning back. It could be worse of course! And in these circumstances, you come across the most interesting cases…

After trying to draw a geometry polygon in SQL Server 2008, I wondered what the difference is between a polygon and a multipolygon. A polygon also accepts more than 1 polygon, just like a multipolygon. So what's the difference then? I think I found it, with the help of Andreas Wolter (Blog | @AndreasWolter).

Just look at the query below:

DECLARE @Obj GEOMETRY
SET @Obj = GEOMETRY::STGeomFromText('POLYGON((10 0, 10 10, 0 10, 0 0, 10 0),
											 (10 15, 10 25, 0 25, 0 15, 10 15, 10 15))'
									,4326)

SELECT @Obj.ToString(), @Obj

This query produces a valid-looking set of polygons, even though it's two separate objects in a single polygon. SQL Server draws both shapes, even though a polygon should consist of only 1 object, not two as in the example above.
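For comparison, here's the same set of shapes written as a multipolygon (note the extra parentheses around each individual polygon). This is what a valid version of the object above looks like:

DECLARE @Obj GEOMETRY
SET @Obj = GEOMETRY::STGeomFromText('MULTIPOLYGON(((10 0, 10 10, 0 10, 0 0, 10 0)),
											 ((10 15, 10 25, 0 25, 0 15, 10 15)))'
									,4326)

SELECT @Obj.ToString(), @Obj.STIsValid()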

This isn’t the weirdest I’ve seen. Just look at the example below:

DECLARE @Obj GEOMETRY
SET @Obj = GEOMETRY::STGeomFromText('POLYGON((10 0, 10 10, 0 10, 0 0, 10 0),
											 (10 15, 10 25, 0 25, 0 15, 10 15, 10 15))'
									,4326)

--==================================================

SELECT
	'Geo Object'					AS Description,
	@Obj							AS GeoObject,
	@Obj.ToString()					AS GeoObject_ToString,
	@Obj.STIsValid()				AS GeoObject_IsValid

UNION ALL

SELECT
	'Geo Object + MakeValid()'		AS Description,
	@Obj.MakeValid()				AS GeoObject,
	@Obj.MakeValid().ToString()		AS GeoObject_ToString,
	@Obj.MakeValid().STIsValid()	AS GeoObject_IsValid

If you run the example, you'll see a description of the objects, the spatial object itself, the object's ToString(), and a bit (boolean) for STIsValid(). The first record in the resultset is just the same as in the previous example. The second row contains the result of the .MakeValid() method.

As you can see, the first record (2 objects in 1 polygon) is not a valid geometry object. The second record shows that .MakeValid() converts your polygon into a multipolygon. And if you check whether the multipolygon is valid, it returns true.

This example was already getting weird, but now run it on a SQL Server 2012 instance. You will see a difference in the multipolygon coordinates. I've run them on both versions, and you can see the differences in the screenshots below:

SQL Server 2008:

SQL Server 2012:

The conversion to multipolygon causes a different rounding of the coordinates. It seems the conversion changed quite a bit between SQL Server 2008 and SQL Server 2012.

So what does this mean for the usage of geometry and geography data? I don't know… It might mean we always need to use .MakeValid() before storing a polygon. Part of the issue is that spatial data is pretty new stuff, which means there are no best practices yet. So time will tell what the best practices will become…
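Until those best practices arrive, a defensive pattern like this (a minimal sketch) is probably the safest bet before storing spatial data:

DECLARE @Obj GEOMETRY
SET @Obj = GEOMETRY::STGeomFromText('POLYGON((10 0, 10 10, 0 10, 0 0, 10 0),
											 (10 15, 10 25, 0 25, 0 15, 10 15))'
									,4326)

-- Repair the object before it is stored or used in calculations
IF @Obj.STIsValid() = 0
	SET @Obj = @Obj.MakeValid()

SELECT @Obj.STIsValid()	-- Now returns 1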

Using SQLCMD to your advantage

A few weeks ago, I came across something called SQLCMD (or SQL Command). I'd never heard of it, so the curious developer in me wanted to know what it was. The official definition according to MSDN is this:

The sqlcmd utility lets you enter Transact-SQL statements, system procedures, and script files at the command prompt, in Query Editor in SQLCMD mode, in a Windows script file or in an operating system (Cmd.exe) job step of a SQL Server Agent job. This utility uses ODBC to execute Transact-SQL batches.

You can run a SQLCMD query either via the command line or in SQL Server Management Studio (SSMS). If you want to run it via SSMS, open a new query and press the hotkey combination ALT+Q, ALT+M. You could also use the menu Query -> SQLCMD Mode.

Once you've done that, you can run queries in SQLCMD mode. This means you can use your query window as a command-line tool. The advantage of this is that you can query multiple servers and instances in a single script.

To run queries in SQLCMD mode you need the commands listed below:

Command      Parameters                    Description
:!!          <command>                     Directly executes a cmd command from SQLCMD
:CONNECT     <server>[\instance]           Connects to the default or specified instance
             [-l timeout] [-U user]        of the specified server
             [-P password]
:ERROR       <destination>                 Redirects error output to a file, stderr or stdout
:EXIT        [(<query>)]                   Quits SQLCMD immediately, or executes the
                                           specified query and returns its numeric result
GO           [n]                           Executes the cached statements (optionally n times)
:ON ERROR    <exit / ignore>               Specifies which action to take if the query
                                           encounters an error
:OUT         <filename / stderr / stdout>  Redirects query output to a file, stderr or stdout
:PERFTRACE   <filename / stderr / stdout>  Redirects timing output to a file, stderr or stdout
:QUIT                                      Quits SQLCMD immediately
:R           <filename>                    Appends a file to the statement cache (ready to execute)
:RESET                                     Discards the statement cache (resets the session)
:SERVERLIST                                Lists local and network SQL Server instances
:SETVAR      <varname> <"value">           Sets a SQLCMD scripting variable

Remember, not all commands are listed above, just the ones I found useful at this time. Parameters listed in < > are mandatory, and those in [ ] are optional.

To write out all the queries would be too much, so I created some scripts for you to download. Here's a list of files you can download and try on your local SQL Server:

Get database info with screen output, from multiple SQL servers
Execute script file, and output to file
Get directory content via commandline
Collection of commands in 1 script file
Script file used in the example above

The first script file is the most interesting, if you ask me. In this script, I connect to multiple instances of SQL Server to retrieve data. You can use this instead of a Multiserver query, which I blogged about earlier. Instead of registering a group of servers and running your query against that, you can now specify the servers you want by server name or IP address.
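To give you an idea, here's a stripped-down sketch of such a script (the server and instance names are made up; replace them with your own). Each :CONNECT takes effect for the batch that follows it, so every GO runs against a different instance:

:CONNECT DevServer1\SQL2012
SELECT @@SERVERNAME AS ServerName, @@VERSION AS VersionInfo
GO

:CONNECT DevServer2\SQL2012
SELECT @@SERVERNAME AS ServerName, @@VERSION AS VersionInfo
GO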

With this information you can start writing your own SQLCMD queries. If you want, you can do this via SSMS or the command line. The command-line utility can be found at this location:

C:\Program Files\Microsoft SQL Server\100\Tools\Binn\SQLCMD.exe

If you have any questions or need some help, just contact me. I’d love to help you out! :)

Multiserver Query

As a developer it's common practice to work on a server farm with 2 or more servers, and a clustered or redundant production environment. Sometimes it's necessary to run certain queries on all the machines you're developing on or maintaining. For example, if you want to know whether all your instances run the same version (and/or Service Pack) of SQL Server. One of the options you've got is to run the query in multiple tabs or windows: one for each server or instance. This seems okay for 2 or 3 instances, but not for an entire OLAP environment.

Luckily the SQL Server team has built in the option to run a query against multiple instances simultaneously. This can be accomplished by running a Multiserver query. It will run your query on a complete group of SQL Server instances. In the example below, I retrieve the version of each instance I connect to. I've done this by using the @@VERSION function that ships with SQL Server.

First, I’ve created a group of servers in the Registered Servers window in SSMS:

This group contains our development instances. These instances run on 1 hardware platform, which contains different virtual machines. Each development team has its own machine with a dedicated SQL Server instance.

If you right-click the server group, you can choose “New Query”:

When the new query window opens, you will see a different status bar. With the default settings of SSMS, the bar will change to a pink color:

Also, in the left corner of the status bar, you can see how many instances are in the group, and how many of these instances are (still) running. In my case, all six instances in the group are running, and will return a result.

For this example I used the query:

SELECT @@VERSION AS VersionInfo

The @@VERSION function returns the version, processor architecture, build date and operating system for the current installation of SQL Server. For more info, see the MSDN article. Also, by default the server name you used to register the server will be shown:

As you can see, all of our development machines run the same version. That's because we copied the instances from 1 base image. As an example, I ran the same query on my local server group:

These are obviously different versions: I have a SQL Server 2008 R2 and 2012 Developer instance running on my laptop.

If you want to, you can change the options for Multiserver queries. For example, you can choose NOT to merge all results into 1 result set. You can do this via Tools -> Options -> Query Results -> SQL Server -> Multiserver Results. There you can change these options:

Only one remark is left. Writing this post I tried several things, but one thing I still don't understand: why are the results ordered differently every time you run a Multiserver query? This might be because the results of the different instances are collected, and merged together to return a single result set. I'm not sure about this though! So if you know the answer to this question, please let me know!

Error: Permissions denied mssqlsystemresource – configurations

Last week I encountered a strange issue on some of our production SQL Servers: we weren't able to connect to them. If we tried to connect with a normal Windows NT (Active Directory) or SQL account, we got the following error (I recreated the error locally):

Message: The SELECT permission was denied on the object ‘configurations’, database ‘mssqlsystemresource’, schema ‘sys’. (Microsoft SQL Server, Error: 229)

For further information about the mssqlsystemresource database, please read my previous post about this database.

But in my case, there was nothing wrong with the Resource database. After all, no other database was unavailable or corrupt. So I needed to do a little more research. After a while I found out that the issue came from a security setting.

We allowed an external company of Data Warehouse consultants (KVL) to access our production SQL Server. They needed to create a Snapshot, and the person responsible for making this possible created 2 Stored Procedures (SPs) for the task: one SP to create the Snapshot, and one to drop it. These SPs are stored in the master database.

But because he wanted them to only access the SPs, he created a user for this (again, recreated locally):

And he added a server mapping for the user:

After that he tested it, and it worked like a charm! He e-mailed me the location and names of the SPs, so I could send them to our consultants. Then he added one more thing to the User Mappings, without any of us knowing:

When the consultants tried to create the Snapshot, they couldn't get it to work. After some research I found out that the User Mapping on the master database was set to db_denydatareader. As you all know, deny permissions always overrule access permissions. In this case that worked against us.
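You can recreate the effect with a few lines of T-SQL (the user name and password below are made up):

USE master
GO

CREATE LOGIN SnapshotUser WITH PASSWORD = 'Str0ng_P@ssw0rd!'
GO

CREATE USER SnapshotUser FOR LOGIN SnapshotUser
GO

-- The user is granted read access, but also denied read access...
EXEC sp_addrolemember 'db_datareader', 'SnapshotUser'
EXEC sp_addrolemember 'db_denydatareader', 'SnapshotUser'
GO

-- ...and because DENY always wins, even a simple SELECT now fails for this user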

So if you ever encounter this issue, please take a look at your security settings. If all of your other databases are still accessible, the error comes from a lack of permissions. You'll encounter this issue most often when a user (NT or SQL user) is a member of 2 separate groups with different permissions.

SQL Server system database – mssqlsystemresource

The database mssqlsystemresource is a read-only database that has shipped with SQL Server since SQL Server 2005. It contains all the system objects that are included in SQL Server. An example is sys.objects: these objects are stored in the Resource database, but appear in every other database. The sys.objects in a user database refer to the Resource database.

SQL Server cannot back up this database. You can create a backup yourself, but you have to treat it as a file copy, not as a normal database backup. It also can't be restored via SQL Server, but you can do that manually.

The Resource database makes upgrading to a new version easier and faster.
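You can check which version of the Resource database your instance is running with two SERVERPROPERTY values:

SELECT
	SERVERPROPERTY('ResourceVersion') AS ResourceVersion,
	SERVERPROPERTY('ResourceLastUpdateDateTime') AS ResourceLastUpdate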

The mssqlsystemresource database is invisible to users, even to the System Administrator (sa). The database can be restored by copying the .mdf and .ldf files from the folder “[Drive]\Program Files\Microsoft SQL Server\MSSQL10_50.[Instance Name]\MSSQL\Binn” to a folder of your choice (in this case D:\Databases). After that, open a query window as the sa user, and run the following script:

USE [master]
GO

CREATE DATABASE [mssqlsystemresource_RESTORED] ON 
	(FILENAME = N'D:\Databases\mssqlsystemresource.mdf'),
	(FILENAME = N'D:\Databases\mssqlsystemresource.ldf')
FOR ATTACH
GO

Once you’ve done this, you can query the restored version of the Resource database.

SQL Server Install – Best Practices

When installing Microsoft SQL Server, there are a lot of options and settings to think about. For these situations I always have a shortlist of options I need to think of. In this blog post I will try to explain why these options are important, and the reasons for my choices.

Disks Separation
If you are picking out your SQL Server hardware, consider the diverse options of hardware and types of storage. In some cases you're on a budget, and hardware isn't your main concern. In that case you don't want to follow this list to the letter, but take some shortcuts to save some money on your budget.

In my case, I always like to see the following disk separation:

1) OS
2) SQL Server Binary (Server install)
3) Database files (.MDF)
4) Log files (.LDF)
5) Temp DB
6) Backup- / CSV- / Text-Files for import, storage, etc (this one is optional, so only if you really need it)

Why the separate disks? Simple: performance! If you want a good performing SQL Server, give it the best hardware configuration you can get away with. Especially with “classic” disks (spinning disks, instead of SSDs), I/O is 90% of the bottleneck in normal cases.

So why separate the MDF and LDF files? To contain the I/O of each file type to a single disk. If you are executing a big update statement, the data and log are written to different disks, preventing I/O contention. By separating the disks you get the best performance you can get.

Keep in mind: use separate disks and NOT different partitions! Partitions are virtual disks carved out of the same physical disks, and can be a prime cause of performance drops!

Instance
If the installation asks you to choose between a default and a named instance, always choose a named instance. This ensures that your SQL Server is always reachable by name, even if you install 2 or more different versions of SQL Server on one machine. In practice, I've seen multiple versions of SQL Server on one machine where none of the instances could be reached, because both versions used the same default TCP/IP port, 1433. So I always use [Machine]\SQL[Version] as the instance name (example: (local)\SQL2008).

Authentication
If you're asked about authentication (Windows only or mixed mode), I always choose mixed mode. This gives you the ability to use SQL Server logins next to Windows NT users (domain users). This way, you're flexible to use NT users or SQL Server logins. In practice you will see that SQL Server logins are often used by applications that only need (read) permissions on one SQL Server. NT users are mostly used when an application needs permissions on multiple machines across the domain/network.
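For reference, a quick sketch of both login types (the names and password are made up):

USE master
GO

-- A SQL Server login, typically used by an application
CREATE LOGIN AppReader WITH PASSWORD = 'An0ther_Str0ng_P@ssw0rd!'
GO

-- A Windows NT (domain) login
CREATE LOGIN [DOMAIN\ServiceAccount] FROM WINDOWS
GO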

File locations
During your server installation, there are a few more things you need to check so that performance is (potentially) increased. One of these steps is making sure that newly created databases are stored in the correct location. You can do this during the installation, when you are specifically asked for it. Make sure that all default paths are set correctly, to prevent wrongly placed databases or logs.

OS Tweaks
There are several things in the OS you can tweak for more performance, but one of the most important (and easily forgotten) is configuring your anti-virus software. It needs to skip MDF, LDF and NDF (secondary data) files as it scans your system. This can be a nice little performance booster, and it's a pain to set up after putting your server into production.

Conclusion
This is a short explanation of the common practices I've seen while working with SQL Server for multiple companies. In most cases the common practice is determined by the function of the SQL Server and the budget. So taking a shortcut here and there doesn't have to be bad. But always keep in mind what the intended function of the machine is, and the load the machine has to keep up with. Putting everything on 2 disks isn't that bad for a test machine, but if you put everything on 2 disks for a financial production system, you're doing something wrong!
