T-SQL Tuesday #51 – Place Your Bets

T-SQL Tuesday is a recurring blog party, started by Adam Machanic (Blog | @AdamMachanic). Each month a blog will host the party, and everyone who wants to can write a blog post about a specific subject.

This month the subject is “Place Your Bets”. If you want to read the opening post, please visit the party-starter: Jason Brimhall (Blog | @sqlrnnr).

When I read about this month’s T-SQL Tuesday topic, the first thing that came to mind was things that you know will go wrong sooner or later. When you encounter a situation like this, you immediately know it can’t last forever. You want to fix it when you see it, but there’s no money, or no time at that moment. But they promise you that in a few weeks you can take all the time you need. Well, that’ll never happen. Until things go wrong, and you can clean up the mess. Sound familiar? Yes, we’ve all seen this, or will see it sooner or later.

 
With great power comes great responsibility
Just imagine this with me. One of your colleagues asks you to look at a problem he’s having with a script someone in your company wrote. You probably solve it while he’s standing right next to you. He watches you solve the problem, and when it’s solved, he walks away with a thousand-yard stare in his eyes. You don’t really think about it when it happens, but it’ll come to you…

A few weeks later, at 10 AM while you’re still having your first coffee of the day, the same developer asks you to look at “his script”. Wait, what?! Yes, he watched you work your magic, and that funny language of “Es-Que-El” seemed easy to learn. So he bought himself a copy of “SQL Server for Dummies”, learned all he needed to know in just one weekend, and now wonders why it took you so long to learn it. From now on, he can write his own scripts, so he doesn’t need you anymore. Except for this last time.

Opening the script scares you: it’s a cursor. But in your frustration and amazement you “fix” the broken script by refactoring the select statement in the cursor. Because the cursor only collects data, you add a “TOP 10” clause to the select statement, and run the script as a test. Nice, it finishes in 25 seconds. “It will only consume 500 rows” is the last thing you hear him say. You send the guy off, so you can continue your own work.

Later in the day, around 4 PM, you meet the same guy at the coffee machine. He starts a discussion about how he needs a new PC, because the script YOU wrote is slow (see where this is going…?). It has been running for about 4 hours now, while it should only collect about 500 records. I know what you think: that’s impossible. You walk with him to his desk, stop the script, and look at his code. That isn’t the query you looked at this morning. Asking your colleague about it explains it all: he “slightly refactored” the script, because he didn’t need all those weird statements to get his results. Well, after a fiery discussion of a few minutes, you explain to him that he DOES need the “FETCH NEXT” in the query, because without it the cursor just kept running the same statement against the first record of the select statement he declared for his cursor, over and over again.
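Just to illustrate what went wrong: a cursor needs a FETCH NEXT inside the loop to advance to the next row. A minimal sketch of the pattern (the table dbo.Orders and column OrderID are made up for this example):

-- Hypothetical example table and column, just to show the cursor pattern
DECLARE @OrderID INT;

DECLARE OrderCursor CURSOR FOR
	SELECT TOP 10 OrderID
	FROM dbo.Orders;

OPEN OrderCursor;

-- The first fetch positions the cursor on the first row
FETCH NEXT FROM OrderCursor INTO @OrderID;

WHILE @@FETCH_STATUS = 0
BEGIN
	PRINT @OrderID;

	-- Remove this FETCH NEXT, and @@FETCH_STATUS stays 0:
	-- the loop keeps processing the same first row forever
	FETCH NEXT FROM OrderCursor INTO @OrderID;
END

CLOSE OrderCursor;
DEALLOCATE OrderCursor;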

So this funny “Es-Que-El” language isn’t that easy to learn. There’s a beautiful quote about that, and I’m not sure who said it: “T-SQL is easy to learn, but hard to master”. So putting your money on one horse, in this case buying yourself a book, isn’t a good idea.

 
Putting your money on one color
Another great example is a company that had a wonderful Business Intelligence environment. They used the whole nine yards: SQL Server, SSIS, SSAS, SSRS, etc. The downside of that, you ask? It was all hosted on one physical machine, on a single SQL Server instance. Oh, and it was running low on disk space, and there was no room in the chassis to put in extra disks. That’s right: it was like juggling burning chainsaws with only one hand. Or an interesting challenge, if you will.

Eventually we hosted a few databases on NAS volumes. At that point, I was told the databases we moved were less important. Pro tip: never EVER trust them when they say that!!! They forgot to tell me that the biggest of the moved databases wasn’t in the backup plan (a 500 GB database takes a long time to back up), and that the last backup was made over a year ago. Surprise: one night the network card failed, for maybe only a microsecond, and SQL Server thought the LUN was offline or the disk had crashed. So SQL Server marked the database as corrupt, and the datafiles as unavailable. After a few hours, a reboot of the server fixed it, and SQL Server could see the disk volumes again. So the database was saved after all.
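By the way, you can avoid that kind of surprise by checking when each database was last backed up. A minimal sketch, using the backup history SQL Server keeps in msdb:

-- Last full backup per database (NULL means no full backup was ever recorded)
SELECT d.name,
       MAX(b.backup_finish_date) AS LastFullBackup
FROM sys.databases d
LEFT JOIN msdb.dbo.backupset b
    ON b.database_name = d.name
   AND b.type = 'D'  -- 'D' = full database backup
GROUP BY d.name
ORDER BY LastFullBackup;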

But you see where I’m going with this? You never know when things go wrong, and putting all your money on one color when playing roulette isn’t the best idea. If the hardware of your single server fails, you fail.

 
Next, Next, Finish?
But the biggest example I can give you of a badly placed bet is companies that work with SQL Server, but don’t hire a DBA. Have you ever worked for a company that works with Oracle? Every single company that works with Oracle has a dedicated Oracle DBA. But have you ever wondered why that isn’t the case when a company works with SQL Server?

Thinking about it, I guess this is because a successful SQL Server installation is only a few “Next, Next, Finish” mouse clicks away. So if the installation is that easy, every developer or person with IT experience can probably administer it. They couldn’t be more wrong. You know that, I know that, every SQL Server professional knows that, but try to convince other people of that fact.

So the worst bet you can place, and this is how I write myself back to the subject of this month, is not hiring a professional to manage your data and data stores. You wouldn’t let your local baker fix your car just because he read some books about cars, right? So why do you let a developer with basic knowledge near your SQL Server? Just because real DBAs cost money? Yes, we do cost some serious money. But in the end, at least when you hire a GOOD DBA, they will make you money. You don’t think so? What does a DBA cost per hour? And how much money do you lose when your servers are down for just an hour?

Moving a system database: rocket science or piece of cake?

Being a DBA often makes you the “Default Blame Acceptor”, according to Buck Woody (Website | @buckwoody). This means that everything is your fault by default. Server broke down? Your fault! Database corrupt? Your fault! A user’s query doesn’t compile because of a syntax error? Yeah, you guessed right… Your fault!

But on the other hand, you have a lot of opportunities to find out the best practices for doing things. An example of that is moving a system database. About two weeks ago we decided to order 4 SSDs for our SQL Server. We plan to store tempdb and the SSAS data on these disks, hoping that it will reduce the resource costs in our environment.

So with no experience of moving system databases, I started thinking about how to do this. You probably need to stop the SQL Server service, move the MDF and LDF files, change the start-up options of SQL Server, start the service, hope that SQL Server finds the new location, etc. But after a quick peek I found a much simpler solution: just modify the current file location!

 
Check the current location and file sizes
Before moving your database (in this case I’m moving my tempdb), run the query below, and store the result just in case all goes south:

-- size is stored in 8 KB pages; (size * 8 / 1024.00) converts it to MB
SELECT name, physical_name, state_desc, (size * 8 / 1024.00) AS InitialSize
FROM sys.master_files
WHERE database_id = DB_ID('tempdb');

 
The reason you also want to store the initial sizes is that if you restart the SQL Server service (one of the next steps), SQL Server will set the files back to the default file sizes. And you don’t want to run on those default settings, of course!

 
Set the new file location
You can set the new file location for your tempdb by running the query below. In this example I’m moving my data and log files to the D:\ volume of my machine:

USE master
GO

-- Point the tempdb data file at its new location
ALTER DATABASE tempdb 
	MODIFY FILE (NAME = tempdev, FILENAME = 'D:\MSSQL\DATA\tempdb.mdf')
GO
-- Point the tempdb log file at its new location
ALTER DATABASE tempdb 
	MODIFY FILE (NAME = templog, FILENAME = 'D:\LOG\templog.ldf')
GO

 
After executing these statements, you’ll see messages like this appear in the Messages window:

 

The file “tempdev” has been modified in the system catalog. The new path will be used the next time the database is started.
The file “templog” has been modified in the system catalog. The new path will be used the next time the database is started.

 
So the file location is altered, but the running values won’t change until your machine is rebooted or the SQL Server service is restarted.
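If you’re curious, you can see both sides of that: the running values come from tempdb’s own catalog, while the configured values live in master. A small sketch to compare them:

-- Running values: the file locations tempdb is using right now
SELECT name, physical_name
FROM tempdb.sys.database_files;

-- Configured values: the locations that will be used after the next restart
SELECT name, physical_name
FROM sys.master_files
WHERE database_id = DB_ID('tempdb');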

Now just restart the SQL Server service (or the machine if you like), and run the first query again. This way you can check if your tempdb is stored in the right folder, and if the initial sizes are correct:

SELECT name, physical_name, state_desc, (size * 8 / 1024.00) AS InitialSize
FROM sys.master_files
WHERE database_id = DB_ID('tempdb');

 
The service should stop and start without issues. After the restart you’ll see that SQL Server created a new MDF and LDF file at the new location. Once you’ve confirmed everything is in order, you can delete the MDF and LDF files from the old location.
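And if the file sizes did reset, you can set them back with the values you stored before the move. A minimal sketch, assuming the saved initial size of the data file was 1024 MB (use your own saved numbers here):

-- Restore the saved initial size (1024 MB is just the assumed example value)
ALTER DATABASE tempdb
	MODIFY FILE (NAME = tempdev, SIZE = 1024MB)
GO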

 
Now, was that so hard?
So as you can see, not all changes in SQL Server are rocket science. One thing I’ve learned is that from now on, I’m not going to assume the worst, but hope for the best!

 
UPDATE
As Pieter Vanhove (Blog | @Pieter_Vanhove) mentions in his tweets about msdb, model and master, in some cases you need to do a little bit more work. Because tempdb is the system database least likely to break SQL Server (it’s recreated every time the SQL Server service starts), you can move it just by changing the settings.

On the restart, the “Running values” (current settings) are overwritten by the “Configured values” (new settings) you set with the query you ran, and tempdb is recreated.

But the other system databases require a little bit more effort. If you want to move master, model or msdb, check out this link.
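To give you an idea of the extra effort: for model and msdb you run the same kind of ALTER DATABASE statements, but SQL Server won’t recreate the files for you, so you have to move them yourself while the service is stopped (master is a different story, because its file locations live in the SQL Server start-up parameters). A minimal sketch for msdb, assuming the default logical file names and the same D:\ volume as above:

-- Point the msdb files at their new location
ALTER DATABASE msdb 
	MODIFY FILE (NAME = MSDBData, FILENAME = 'D:\MSSQL\DATA\MSDBData.mdf')
GO
ALTER DATABASE msdb 
	MODIFY FILE (NAME = MSDBLog, FILENAME = 'D:\LOG\MSDBLog.ldf')
GO
-- Then stop the SQL Server service, move MSDBData.mdf and MSDBLog.ldf
-- to the new locations yourself, and start the service again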

And thank you Pieter, for pointing out this stupid mishap to me!

Create an XMLA file for OLAP Cubes

In order to refresh your OLAP Cubes (SQL Server Analysis Services), you need to create XMLA files. These XMLA files can be used in a SQL Server Agent job, so that the Cubes are refreshed and re-processed. To accomplish this, follow these steps:

1) Deploy your Cube to your SQL Server
2) Right-click on the database, and choose “Script Database as” -> “ALTER To”
3) Add the following node to the top of the XML:

<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">

4) Change the “Alter” node into the following node:

<Alter AllowCreate="true" ObjectExpansion="ExpandFull">

5) Add the following nodes to the bottom of the XML:

 <Parallel>
  <Process>
   <Type>ProcessFull</Type>
   <Object>
    <DatabaseID>SandBox</DatabaseID>
   </Object>
  </Process>
 </Parallel>
</Batch>

6) Change the “DatabaseID” in the nodes of Step 5 to the ID of the SSAS database you deployed
7) In your SQL job, set the step “Type” to “SQL Server Integration Services Package”
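Putting steps 3 to 5 together, the skeleton of the XMLA file ends up looking like this (the scripted database definition from step 2 goes where the placeholder comment is; “SandBox” is just the example database ID used above):

<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
 <Alter AllowCreate="true" ObjectExpansion="ExpandFull">
  <!-- the scripted database definition from step 2 goes here -->
 </Alter>
 <Parallel>
  <Process>
   <Type>ProcessFull</Type>
   <Object>
    <DatabaseID>SandBox</DatabaseID>
   </Object>
  </Process>
 </Parallel>
</Batch>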
 
So from now on, your SQL job will re-create the SSAS Cube and process it. But be aware: every time you change your Cube, change your XMLA file as well!!!
