T-SQL Tuesday #56 – Assumptions

T-SQL Tuesday is a recurring blog party, started by Adam Machanic (Blog | @AdamMachanic). Each month a blog will host the party, and everyone who wants to can write a blog post about a specific subject.

This month the subject is “Assumptions”. If you want to read the opening post, please click the image below to go to the party-starter: Dev Nambi (Blog | @DevNambi).



 
This month’s topic is about assumptions. A few years back, I worked in a team that consisted mainly of .NET developers. Every time we mentioned “I think so…”, “I assume it works like this…” or “I think we should…”, one of them used the quote: “Assumption is the mother of all f*ck ups!”, a line from the movie Under Siege 2: Dark Territory. And he was right. The moment you assume something, it’s going to blow up in your face in the end.

 
I tested it, and it works
Working for a larger organization should mean it is better prepared for certain things than a smaller one. But I’ve seen large organizations that were badly prepared, or just plain unprepared. They assume their processes work, or that they will never encounter failure at all.

One of the companies I worked for took backups every night. Full backups. Of databases that were between 100 GB and 500+ GB… And they never tested a restore… Why? They used the default maintenance functionality with “Verify Backup Integrity” enabled, and they had never needed to restore a database before. So they only took backups because management wanted them to. They didn’t understand why, because their processes had never failed, and would never fail in the future.

But one day, it wasn’t their processes that failed. A LUN went offline during the ETL process, and SQL Server naturally detected that. SQL Server put a database into suspect mode, because of database corruption. And because there were no usable backups, they needed to move to plan B: reprocessing more than 2 years of data (stored in XML files).

Eventually I solved it and recovered the database without needing a restore, but it scared them. They now saw why they needed backups, and why they needed to test restores on a regular basis. But they forgot about it after a few days, and we never got the time to change the maintenance processes or test any restore. After that, I made what I consider the best choice possible: I found myself a new challenge.

 
If you don’t know what you’re talking about…
Another example of assumptions I have seen a lot over the years is people explaining things to other people without any proper knowledge of the subject. I can recall a conversation between me and an intern. He was a .NET developer, and had some questions about how a T-SQL feature worked. A junior BI developer started laughing when the intern asked his question. “What a stupid question, everybody knows the answer to that!” he said. Slightly irritated by that, I asked him to provide the answer to the question. He didn’t want to. I asked him again: “You answer the question, because you laughed at it, and I want to hear the answer from you.”

He started to stutter, and he explained the functionality all wrong. When I explained it the right way, the .NET intern thanked me, and walked away with his new knowledge, ready to put it into practice. The BI developer wanted to continue the discussion. “You’re all wrong! That feature doesn’t work that way!”. I politely told him that I used this feature on a daily basis, and that he was wrong. The discussion went on a little longer, but I ended it by telling him: “I’ve been doing this for a number of years now, and working with SQL Server is what I do. You just started, and wrote your first query a few months ago. If you find any resources that show I’m wrong, I’ll be happy to quit my job. Until then, please don’t explain T-SQL to other people if you don’t know what you’re talking about.”. To this day, he has never gotten back to me on that discussion.

 
I don’t need to check that!
Another great assumption you’ll see in several companies is IT people who trust their own automation a little bit too much. The rule in IT is that if you need to do something more than once or a couple of times, you should automate it. Automation is a good thing, and it can save you time. But who checks that your automated process doesn’t fail itself? One of the things I’ve seen is a developer who created an automated process that checked a log table on a database server, and mailed new errors to the developer. At first sight, that’s a perfect solution: you don’t have to monitor the log table yourself, because an automated process does that for you.

At one point, an application seemed to fail. It threw exceptions, and the end users weren’t able to do anything with the application. The developer was called, and he told the users to contact the system administrator, because it must have been a server or hardware problem. The system administrator called the developer after a few minutes, and told him the server and hardware were in perfect condition. The developer insisted his software wasn’t failing, because he hadn’t received any errors by email. But after a quick check, the developer came to the conclusion that his automated monitoring process itself had failed. The developer lost a lot of credibility because of this attitude. As you see, this is another example of an assumption gone wrong.

 
Never stop asking questions
One of the most important things I wanted to show you with this blog post is that if you don’t know the answer to a question, don’t be afraid to ask someone. The same goes for processes, tools, functionality, or any other question you want to ask. If people mock you for asking questions, they are the ones who are wrong. You’re just trying to learn and grow, so don’t feel bad about yourself!

T-SQL Tuesday #54 – An Interview Invitation

T-SQL Tuesday is a recurring blog party, started by Adam Machanic (Blog | @AdamMachanic). Each month a blog will host the party, and everyone who wants to can write a blog post about a specific subject.

This month the subject is “An Interview Invitation”. If you want to read the opening post, please click the image below to go to the party-starter: Boris Hristov (Blog | @BorisHristov).



 
This month’s topic is about job interviews. Looking back, I can say I’ve had a few interviews over the last few years. Not only to get hired at a company myself, but also to hire new employees for the companies I worked for. Both types of conversations can be very interesting, or get on your nerves very quickly. I’ve seen both…

 
Hiring people
When you work for a company and you hire new people, you want to make sure you’re talking to the right type of people. Whether or not I’m hiring you isn’t just based on technical skills. The way I see it, technical skills are mostly built from experience and insight, and knowledge can be built up by reading a book, watching a video, or talking to and learning from other people with a similar skill set. What I’m personally looking at in an interview is personality, passion for what a person does, and what he or she wants to achieve in the next few years. That will tell you something about the person. I’m not saying technical skills aren’t important, but they’re not the only thing I look for. One of the last companies I worked for looks for people they would want to drink a beer with on a Friday night. So again, personality is in some cases as important as technical skills.

 
Interviewing people
Talking to other database professionals can be either very interesting, or very frustrating. One of the interviews I did in the past few years was to hire a new colleague for a project I was working on. His resume looked good, he had a few years of experience on similar projects, and I thought he would make a good addition to the team. When I started the interview, he seemed to be a little nervous. I tried to make him feel a little more comfortable by asking him about his experience, past projects, and personal life. When he calmed down, we slowly moved the conversation to his technical skills.

Talking about the project, I started to notice he wasn’t giving me the answers I would’ve expected from someone with his experience. Okay, maybe he misunderstood the project. We moved on to common issues we all run into when working on projects. Every question was answered with “it depends…” and “I’ve read that…”. Okay, that’s no good. Theoretical knowledge alone isn’t what I expected. Not long after that, I ended the interview. If you’re too scared to even make a theoretical decision, and defend that point of view, I don’t want you working in my environment. That shows me that if you make a mistake, and that can happen to even the most experienced employee, you won’t have the guts to tell me you did something wrong. And I want you to tell me that, so we can solve it together, and learn from it.

Another interview I had with a database professional turned out to be a dull conversation. The person came in to apply for a senior position, and he definitely had the experience for it. His personality was good, and would have fit the team perfectly. But every question we asked him, he answered with a lot of needless words that ended up in a best-practice answer. The reason I didn’t want to hire him was his lack of creativity. If you’re spilling answers you read on the internet or in books, it shows me you’re probably not creative enough to solve the problems you’re encountering. If you’re in a stressful situation, and you need to read a book to think of a solution to the problem, you’re probably not the DBA I’m looking for.

 
Being interviewed
One of the most interesting interviews I’ve had in the last few years was one of my last job interviews. The first interview went well; we talked about the company, the team, etc. The IT manager and team leader wanted to talk to me a second time, and that turned out to be the most interesting interview I’ve had to date.

The interview started out nicely. I talked to the team leader again, and another team member joined the interview. We had a good conversation about technical stuff, the team, projects, etc. When we were approaching the end of the interview, the most interesting part started. I knew the company worked with a consultancy from England, and that the lead consultant visited the office every few weeks. Big surprise: he was there that day. So the moment he walked in, I started to struggle. I needed to switch from my native Dutch to English, and found myself being interviewed by an MCM and MVP. This made me even more nervous, and because of that I started to doubt every answer I gave. But in the end I apparently did a reasonable job, because the company hired me. Now I see him every month, and we have some good conversations, even though I’m still sometimes afraid to ask him a question, worrying it’s a dumb one.

 
Maybe the most annoying person is the perfect candidate…
Thinking about hiring a new colleague, you might end up with a pretty big dilemma. The person you want to hire because of his personality might not be the best choice. The one with the best technical skills might be a better fit, but you don’t like him at all. So when looking for a new colleague or team member, you might end up wondering what’s best for the company, and setting your own feelings aside. But even then, it’s always a gamble with new people. At first the new colleague seems to have perfect technical skills and a great personality, but in the end his skills aren’t that good, or his personality is slightly different than you thought. Hiring people is still a combination of asking the right questions and a gut feeling.

Backup and relax?

Keeping a good backup strategy is a must for every DBA and database developer. But just creating a backup isn’t enough. Perhaps you don’t have enough storage to keep a week’s worth of full backups of your database. Or taking a full backup of a database takes so long that it’s only possible on weekends. So what are your options?

 
RPO and RTO
Your whole backup strategy starts with determining the RPO (Recovery Point Objective) and RTO (Recovery Time Objective). Brent Ozar (Blog | @BrentO) wrote a great blog post about these terms, and explains what they mean.

Basically, you need to determine the maximum data loss (RPO) and downtime (RTO) that are allowed, and from there you start creating a backup strategy. This is what RPO and RTO look like when you visualize them:

 
Storage
Another thing you want to consider is the storage available for your backups. Most of the time the backups will be stored on a NAS (Network-attached storage), and not on the local server, so storage isn’t a big issue in that case.

I’ve also seen companies that created the backup on the local server, and copied it to a network location after completion. In my opinion that’s one more dependency you could avoid, but other than that it’s a valid option.

 
Backup types
SQL Server supports multiple backup types. They all have their pros and cons, and give you the ability to create a backup strategy that fits your needs. In this blog post, I’m assuming the database we work with uses the full recovery model.

 
Full backup
A full backup is a backup of the entire database. With this backup file you’ll be able to recover the entire database, without needing any additional log files. Creating a full backup can take quite some time, depending on the size of the database. Let’s visualize this with an example:

 
Looking at the example, you’ll see that a full backup is created every night. But on Friday at 8 PM the database crashes, and we need to recover from backup. The last full backup was taken 20 hours earlier, so those 20 hours of data changes are lost.
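
For reference, this is roughly what such a nightly full backup looks like in T-SQL (the database name and backup path are made up for the example):

-- Full backup of the entire database to a single backup file.
-- Database name and file path are examples.
BACKUP DATABASE SalesDB
TO DISK = N'\\BackupServer\SQLBackups\SalesDB_Full.bak'
WITH CHECKSUM, STATS = 10;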

 
Differential backup
If you have less time to spend on backing up your database every night, one of your options is to take a differential backup. A differential backup only backs up data that has changed since the last full backup. A differential backup can’t be created without taking a full backup first. If you try to create one without a full copy of the database, SQL Server will throw an error:

 
When you create a full backup, a backup chain is started. This means that SQL Server registers the last LSN (Log Sequence Number) that was included in the backup. When you take the next backup, it will contain all changes from the last LSN of the previous backup up to the time of your new backup.

A backup chain can’t be started with a differential backup. Also, when you want to restore a differential backup, you need the full backup it’s based on. To put this into perspective, look at the visualization below:

 
At midnight on Monday we create a full backup of the database. On each of the following nights we create a differential backup at midnight. On Friday at 8 PM the database crashes, and we need to restore a backup. All we need is the full backup from Monday, plus differential backup 4. But although a differential backup takes less time to create, it hasn’t helped you much here: you still lost 20 hours of data.
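
Creating one uses the same BACKUP DATABASE statement with the DIFFERENTIAL option (names and paths are again examples):

-- Differential backup: contains all changes since the last full backup.
BACKUP DATABASE SalesDB
TO DISK = N'\\BackupServer\SQLBackups\SalesDB_Diff.bak'
WITH DIFFERENTIAL, CHECKSUM, STATS = 10;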

 
Transaction Log backup
The last major type of backup is the transaction log backup. This backup contains all log records created since the previous transaction log backup (or since the first full backup, if no log backup has been taken yet). This gives you the opportunity to perform a so-called “point-in-time recovery”.

Just like a differential backup can’t be created without a full backup, a transaction log backup can’t be created without a full backup first. So a transaction log backup can’t be used to start a backup chain either. Let’s take the same example, and add a transaction log backup every 15 minutes (the blue lines represent the transaction log backups):

 
If the database crashes at the same time as in the previous examples, your data loss is slimmed down from 20 hours to a maximum of 15 minutes. In order to recover the database, you need the full backup created on Monday, the differential backup created on Friday at midnight, and all transaction log backups created after that differential backup. So even if the database crash occurs a few seconds before the next transaction log backup, the maximum data loss is 15 minutes. And again, without a full backup you can’t create a transaction log backup.
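
To make that recovery concrete, here’s a sketch of the log backup itself and the restore sequence. The file names and the STOPAT timestamp are hypothetical; in practice you would restore every log backup taken after the differential, in order:

-- A transaction log backup, taken every 15 minutes in this example.
BACKUP LOG SalesDB
TO DISK = N'\\BackupServer\SQLBackups\SalesDB_Log_1945.trn'
WITH CHECKSUM;

-- Recovery: restore the full and differential backups without recovering,
-- then replay the log backups up to the moment just before the crash.
RESTORE DATABASE SalesDB
FROM DISK = N'\\BackupServer\SQLBackups\SalesDB_Full.bak'
WITH NORECOVERY, REPLACE;

RESTORE DATABASE SalesDB
FROM DISK = N'\\BackupServer\SQLBackups\SalesDB_Diff.bak'
WITH NORECOVERY;

RESTORE LOG SalesDB
FROM DISK = N'\\BackupServer\SQLBackups\SalesDB_Log_1945.trn'
WITH STOPAT = '2014-10-17 19:59:00', RECOVERY;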

 
Pitfalls
Here’s a pitfall to be aware of: if you create a database, back it up, and then throw away that backup file, SQL Server will still let you create a differential or transaction log backup. It doesn’t require the last full (or, in case of a transaction log backup, differential) backup file to still be present. So remember to always check if there is a valid backup available, either on the server or on your backup location.
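
One way to start that check is to query the backup history in msdb. Note that msdb only records that a backup was taken; it can’t tell you whether the file still exists, so this is only the first half of the check:

-- Last backup per database and type: D = full, I = differential, L = log.
SELECT database_name,
       type,
       MAX(backup_finish_date) AS last_backup
FROM msdb.dbo.backupset
GROUP BY database_name, type
ORDER BY database_name, type;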

 
Backup compression
From SQL Server 2008 onward, you can use a feature called backup compression. Whether or not you compress your backup can make a big difference in performance. A compressed backup is smaller, so it requires less I/O when created, and that can increase backup speed significantly. On the other hand, compressing a backup increases CPU usage. So it’s a tradeoff you need to consider. But in some cases, it can solve the problem of a storage shortage.
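
Enabling it is a single option on the backup statement (example names again):

-- Compressed full backup: smaller file and less I/O, at the cost of CPU.
BACKUP DATABASE SalesDB
TO DISK = N'\\BackupServer\SQLBackups\SalesDB_Full.bak'
WITH COMPRESSION, CHECKSUM;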

 
Files and Filegroups backup
Creating a file or filegroup backup can be practical when performance or database size is an issue for you. Perhaps taking a backup of your 500 GB database takes too long, and you need to consider other options.

You can back up all filegroups separately, but it’s also possible to combine a number of filegroups in a single backup. This makes it easier to balance your backups over several disks if you’d like to. Alternatively, you can create a backup that consists of multiple files. This can be achieved by adding more destination files at the bottom of the “create a backup” GUI. SQL Server then balances the data across the files you added.
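
Both variants sketched in T-SQL, with made-up filegroup names and paths:

-- Backup of a single filegroup.
BACKUP DATABASE SalesDB
FILEGROUP = N'Archive'
TO DISK = N'\\BackupServer\SQLBackups\SalesDB_Archive.bak';

-- Full backup striped across multiple destination files on different disks.
BACKUP DATABASE SalesDB
TO DISK = N'E:\Backups\SalesDB_1.bak',
   DISK = N'F:\Backups\SalesDB_2.bak';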

Adding more destination files to your backup can also increase performance. Jes Schultz Borland (Blog | @grrl_geek) wrote a great article about that. She tested several options to see what the impact on performance is.

 
Copy-only backup
One of the most important backup options (in my opinion) is the copy-only backup. This allows you to create an ad-hoc backup without breaking the backup chain.

A copy-only backup works independently of any previous backup or backup plan. A copy-only backup doesn’t take the last LSN into account, and doesn’t register itself as the new base of the chain. So if you have a backup plan in place, and you or one of your colleagues needs to create an ad-hoc backup, copy-only is the way to go.
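
Creating one is just another WITH option (example names once more):

-- Ad-hoc backup that does not affect the backup chain.
BACKUP DATABASE SalesDB
TO DISK = N'\\BackupServer\SQLBackups\SalesDB_AdHoc.bak'
WITH COPY_ONLY, CHECKSUM;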

 
Now can we relax?
The short answer is: NO! Your work has only just begun. Now that you have a backup strategy, you need to build it, test it, tune it, and cherish it. Creating the perfect backup strategy isn’t a silver bullet. Your databases change, your processes change, your colleagues change…

So when was the last time you tried to restore a backup from your automated process? You can’t remember? Time to get it done then! You know what they say: a DBA is only as good as his last restore. So if you want to keep working as a DBA for your company, start preparing a test restore now.
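
A minimal test could look like the sketch below. RESTORE VERIFYONLY only proves the file is complete and readable, so the real test is restoring the database under a different name on a test server (the logical file names and paths here are assumptions):

-- Step 1: verify the backup file is complete and readable.
RESTORE VERIFYONLY
FROM DISK = N'\\BackupServer\SQLBackups\SalesDB_Full.bak'
WITH CHECKSUM;

-- Step 2: the real test - restore it as a copy on a test server.
RESTORE DATABASE SalesDB_RestoreTest
FROM DISK = N'\\BackupServer\SQLBackups\SalesDB_Full.bak'
WITH MOVE N'SalesDB'     TO N'T:\Data\SalesDB_RestoreTest.mdf',
     MOVE N'SalesDB_log' TO N'T:\Log\SalesDB_RestoreTest.ldf',
     RECOVERY;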

 
And then…?
Once you’ve created a backup strategy, the hard work just begins. How are you implementing your backups? Are you planning on using the default SQL Server maintenance plans? Are you building something yourself with SQL Server Agent Jobs and SSIS packages? Maybe you want to buy a solution from a specific vendor you like or know? Well, what about a free solution?

If you’re looking for a cheap way out, building it yourself is a good option. But why not look at the completely free solution by the new MVP Ola Hallengren (Website | @olahallengren)?

It’s a solution used by many community members, and it has won a lot of awards over the years. Not sure if it’s safe to use? Have a look at the list of companies that use his solution, or read the blog post by Jonathan Kehayias (Blog | @SQLPoolBoy) about it.

Another great resource to start from is the TechNet page about backups. This contains a lot of information about the techniques behind the backup process, and the possible pitfalls you’ll encounter.

 
Conclusion
When creating a backup strategy, you need to take a lot of factors into account. What kind of hardware are you working with? Is the storage you need available? Is it possible to create a full backup every night, or only on weekends?

After building your (custom) solution, you need to spend time on tuning and maintaining it. Your databases aren’t static, and will change every second, every minute, every day. So keep adjusting your process to perform at its best, and in the end, you will create your own free time to spend on cool things.

T-SQL Tuesday #50 – Automation: yea or nay

T-SQL Tuesday is a recurring blog party, started by Adam Machanic (Blog | @AdamMachanic). Each month a blog will host the party, and everyone who wants to can write a blog post about a specific subject.

This month the subject is “Automation”. If you want to read the opening post, please click the image below to go to the party-starter: Hemanth D. (Blog | @SqlChow).



 
Being a DBA, you want to automate as many processes as you can, in order to save time that you can spend on more important things. But have you ever considered that it’s possible to over-automate your processes?

 
We’re safe! What can go wrong here?
At one of the companies I worked for, they thought they had everything sorted out. Indexes were rebuilt every day or every week (depending on the database), databases and log files were shrunk, databases were checked for corruption, backups were running, etc. They felt safe, knowing that they could handle any situation SQL Server would throw at them. It would blow up in their faces eventually…

One of the first things I checked was the backups. The backup job was running, but only a few databases were actually selected for backup. And the biggest database (500+ GB), which was pretty important, was skipped because it took too long to back up. And guess what: they didn’t EVER test recovering from a backup, because of a lack of disk space and time. And there you have it: a false sense of safety!

I don’t have to tell you not to shrink your databases and log files. Everybody knows that every time you shrink your database, a kitten dies… Or an index dies… Or the soul of your database… I’m not sure which one, but take your pick. It causes (and I quote Paul Randal (Blog | @PaulRandal) on this!) “*massive* index fragmentation”. Read more about that over at Paul’s blog. Besides that, if your next query needs more space in a data or log file, you’ll see extra wait time because of file growth.

The indexes were rebuilt every night on the important databases, and every weekend on the less used databases. But they never checked whether the problem they originally had was actually fixed by this solution.

Also, the corruption check was only run on user databases. They had never heard of running a corruption check on system databases. The system databases were included in the backup process, but they never took the time to check whether they could restore them, or whether they were backing up an already corrupted database.
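
Checking the system databases is no different from checking a user database; a minimal version, run from a scheduled job, could look like this:

-- Corruption checks on the system databases.
DBCC CHECKDB ('master') WITH NO_INFOMSGS;
DBCC CHECKDB ('model')  WITH NO_INFOMSGS;
DBCC CHECKDB ('msdb')   WITH NO_INFOMSGS;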

 
Focus on the important stuff
So instead of automating all your processes, maybe you should focus on what’s really important. You just automated your backup process. But does it run every time? Are the backups actually written to disk? Can you restore one of the backups you created?

What I’m trying to say is, you can automate tasks whenever and wherever you like, but don’t forget to test them. Once you’ve automated something, plan regular tests to verify if the automated process runs the way you expect it to. And is the end result really the result you want and expect?

 
Don’t reinvent the wheel
Another tip: don’t reinvent the wheel. Plenty of people have encountered the same issue before you, and have written about it or published a solution. So before you build your own maintenance solution, or automate health status reports, check with your community members. Help can be found for almost every problem, but checking that the solution actually works is still up to you.

T-SQL Tuesday #45 – Follow the Yellow Brick Road

T-SQL Tuesday is a recurring blog party, started by Adam Machanic (Blog | @AdamMachanic). Each month a blog will host the party, and everyone who wants to can write a blog post about a specific subject.

This month the subject is “Follow the Yellow Brick Road”. If you want to read the opening post, please click the image below to go to the party-starter: Mickey Stuewe (Blog | @SQLMickey).



 
When I read this month’s subject, I didn’t know what to write about, until I read one of the other blogs. So I’m sorry Mickey, but I’m using this month’s subject as a guideline, and extending it a little bit. I’m glad I know Mickey, and I’m guessing she doesn’t mind me getting a little creative.

The writer of the post I read talked about index fragmentation and data growth, and that reminded me of a situation I encountered a while back.

When you come in to a company, you’d really like to see (and maybe even expect) to walk into a well-organized team. A team that has procedures that work for them, and team members that have agreed to solve problems in a specific way. But sometimes, that’s not what you get…

 
Here you go!
On my first day as DBA at the company, I walked in, and after a few conversations booted up my computer. Once it was booted, they gave me the name of a server, and wished me good luck. No documentation, no explanation of the databases, and no mention of who worked on the server. The only thing they said when they “handed over the key” was: “Here you go! From now on it’s your responsibility to keep the environment running and healthy!”.

So, there I was… No documentation, no health check, no one to tell me what the status of the server was…

 
Taking that first step is always the hardest
So the first thing I did was check which databases were on the server. There were a lot: 70 in total. Then I checked the user permissions: most users were members of the db_owner role. Great! Asking which users really needed all those permissions was useless. Everyone needed full access to everything.

So then I took the step to tell them that I was in charge of the server from that moment on, and that if they did something without my permission, I’d remove their access to the environment. Looking back, that was easier said than done.

But the first thing that was really hard to accomplish was reporting on server health. No one thought we needed that; if something went wrong, we would surely notice it in an instant. And because I knew we really did need it, I just cleared my schedule, and created a few reports.

 
Being right is always a fun thing
A few weeks later, they finally saw I was right. One of the databases filled up a disk, and now we had issues. The only way to see what had gone wrong was to look at the data growth over the previous weeks in my reports. So now I could show them how wrong they were. It’s nice to be right once in a while!

 
Audit
This is where this month’s topic comes into play. A clear indication that you lack an audit trail is when there’s an issue with one of your processes, databases, or even something as small as a specific dataset, and you don’t know where the issue came from. Following your data flows from beginning to end is always a good idea. And if you work at a larger company, you’re (at least in the Netherlands) required to document your data flows for a financial audit every year.

But it’s not only your datasets that need audit trails. Your databases are in dire need of an audit trail as well. Because what happens if your database grows 50% in size over the course of a week, and you don’t track those kinds of numbers? Your disk fills up, your database stops working (if you haven’t got the space to extend your data files), and eventually your world becomes a dark and evil place if you can’t fix it.
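
Even a small scheduled job that snapshots file sizes into a history table gives you that kind of audit trail. A minimal sketch (the DBA database and table names are made up for the example):

-- One-time setup: a history table in a (hypothetical) DBA database.
CREATE TABLE DBA.dbo.DatabaseSizeHistory
(
    capture_date  datetime NOT NULL,
    database_name sysname  NOT NULL,
    size_mb       int      NOT NULL
);

-- Run daily from a SQL Server Agent job to build up the history.
INSERT INTO DBA.dbo.DatabaseSizeHistory (capture_date, database_name, size_mb)
SELECT GETDATE(),
       DB_NAME(database_id),
       SUM(size) * 8 / 1024  -- size is in 8 KB pages
FROM sys.master_files
GROUP BY database_id;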

 
Conclusion
Even if you don’t have a full front-to-back audit trail of your data and processes, at least try to monitor and audit the key points in your data flow. That helps you debug excessive data growth, and helps you when (for example) a user creates a new database without asking your permission. Even small audits help you out when you’re in a world of pain and trouble.

T-SQL Tuesday #44 – The second chance

T-SQL Tuesday is a recurring blog party, started by Adam Machanic (Blog | @AdamMachanic). Each month a blog will host the party, and everyone who wants to can write a blog post about a specific subject.

This month the subject is “The second chance”. If you want to read the opening post, please click the image below to go to the party-starter: Bradley Ball (Blog | @SQLBalls).



 
This month’s topic isn’t easy for me. Even while I’m writing this, I’m still thinking about what it means to me personally. Second chances sound too good to be true. Doing something for a second time, and trying a different approach than the first time, in order to succeed…

Normally I try not to be a person who looks back at previous mistakes, but sometimes that’s easier said than done. I must say, there are not a lot of things I regret in my life. Only one that’s really close to my heart, and that probably nobody knows about. But I won’t bother you with that story…

 
Deserving a second chance
People always say: everybody deserves a second chance. But I don’t completely agree with that. Everyone makes mistakes, and that’s not something to be ashamed of. But if you make the same mistake twice, you need to start wondering if there’s something you could have done to prevent it. But even then, you deserve a second chance.

The people that actually know me, know I’m a nice guy, I’m honest (sometimes even a little bit too honest), and normally you can’t get me angry. But if you screw the same things up over and over again, I’m not that friendly anymore. Let me explain that with an example.

 
No, that wasn’t me!
A while ago I worked with someone who thought he was really good at his job. Personally, I had some issues with that opinion, but I gave him the benefit of the doubt. On a number of occasions he screwed things up, and I thought he should have known that what he was doing was never ever going to work. But still, I was willing to give him that second chance. But then he got me angry. And you won’t like me when I’m angry.

There were a number of SQL Server Agent jobs running, and they locked up some of our tables and databases. When I asked him to look at that, he said he hadn’t run those jobs, and focused on his screen again. So I asked him again, nicely, to look at it. He replied with the same answer.

A little bit angry, I told him the jobs had been started on the server, and that he was the only one logged on to it. Then he “suddenly” remembered he had started the jobs, and said the locking wasn’t that bad. As a DBA, I took a deep breath, counted to 10, and waited for him to fix the issue. But if you’re that stubborn, clearly lying to me, and don’t even have the courage to tell me you screwed up, you don’t deserve a second chance in my opinion. At least be honest with yourself and your colleagues!

 
Honesty gets you a second chance
At this and previous companies I worked for, I always tried to teach the students and interns that they need to be honest and listen to people with experience. Even if things go wrong and you’re the one to blame, at least tell the truth. For me, that’s the difference between fixing the issue together and moving on, or letting them take the fall all on their own. This is also a lesson that was handed down to me by my colleagues a few years back. This is what happened to me, as I remember it:

When I started my first job in IT, I was offered a job as SQL Server Consultant. That meant that I was responsible for data conversions from different systems to our core system. When I took the job, I had never written a query before. But by listening to colleagues and my mentor (a good friend of mine who worked for the same company), I made it into a development team about 1.5 years after I started my first job.

That meant I was able to access the production system (yes, that’s where the problems began!). These permissions were given to me so I could solve data-related issues in production. Until the day they asked me to update 5 rows in production. I checked and double-checked the T-SQL statement I wrote, asked a colleague to take a look at it, and then took a break away from my computer so I could totally focus on the task when I got back.

I sat down again, looked at the query one last time, and pressed F5… One minute passed… Two minutes passed… And then the query finished… 50,000 rows affected… I slightly panicked, as I noticed I had selected only the UPDATE with half of the WHERE clause, and no BEGIN TRAN… My heart started racing, and I picked up the phone and asked the system administrator (a good friend of mine, who worked at a different location) if he could restore the last backup for me, because I had screwed up. After some questions, and some explanation of my mistake, the last thing he said before he hung up the phone in anger was: “Which backup? The one that didn’t run for the last few weeks?”.

I didn’t know what to do. How could I ever fix this? Almost every record in the table was updated, and there was no way of knowing what the old values of the updated records were. So it took all my courage to pick up the phone, and ring the system administrator again. All I heard on the other side of the phone was his evil laughter. Before I could ask him what was going on, he told me: “I’m glad you were honest with me. But don’t worry, I’m restoring the backup that was taken an hour ago. No data is lost.”.

At that moment, I didn’t know what to think or feel. At first I wanted to slap him silly, but a few minutes later I wanted to thank him for his wonderful help. He probably saved my ass, and he never told anyone except my mentor (who also was my direct manager back then, and also a good friend of us both). A few days later, the three of us talked about it face to face, and eventually all laughed about the situation.

 
A wise lesson
If I learned anything from that situation, besides never running an update without a transaction or WHERE clause, it is to be honest. Even though you might think the company will fire you for the mistake you made, it’s always better to tell them than to let them find out themselves. And that’s what I try to tell the students, interns, and junior colleagues I work with. Be honest, and you’ll earn a second chance…
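
The pattern I’ve used ever since looks something like this (table, column, and values are of course made up):

-- Safe update pattern: explicit transaction, WHERE clause,
-- and a row count check before committing.
BEGIN TRAN;

UPDATE dbo.Customers
SET    Status = 'Inactive'
WHERE  CustomerID IN (101, 102, 103, 104, 105);

-- Expecting exactly 5 rows; roll back if anything else happened.
IF @@ROWCOUNT = 5
    COMMIT TRAN;
ELSE
    ROLLBACK TRAN;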
