Saturday, January 28, 2017

Reading an Execution Plan - Part 1

An execution plan is a bit like the Bible: everyone knows it is there, but very few read and understand what is in it. We hear a lot of people asking "Why is this query slow?", but if you ask them "Have you looked at the execution plan?" you rarely get the answer "Yes".
So first of all, what is an execution plan? In simple terms, it is the result of the query optimizer's attempt to calculate the most efficient way to implement the request represented by the T-SQL query you submitted. In other words, it shows how the query was, or will be, executed inside SQL Server. When you know how to read it, you can find the places that are making the query slow.

Execution Plan Types

When you look at the execution plan of a particular query, SQL Server presents two types of plans:

Estimated Execution Plan

When estimated execution plans are generated, the Transact-SQL queries or batches do not execute. Instead, the execution plan that is generated displays the query execution plan that SQL Server Database Engine would most probably use if the queries were actually executed.

Actual Execution Plan

When actual execution plans are generated, the Transact-SQL queries or batches execute. The execution plan that is generated displays the actual query execution plan that the SQL Server Database Engine uses to execute the queries.

Execution Plan Formats

In SQL Server, an execution plan can be viewed in three different formats:
  • Graphical Plan
  • Text Plan
  • XML Plan

Graphical Plan

These are quick and easy to read but the detailed data for the plan is masked. Both Estimated and Actual execution plans can be viewed in graphical format.

Text Plans

These are a bit harder to read, but more information is immediately available. There are three text plan formats:
  • SHOWPLAN_ALL : a reasonably complete set of data showing the Estimated execution plan for the query
  • SHOWPLAN_TEXT : provides a very limited set of data for use with tools like osql.exe. It too shows only the Estimated execution plan
  • STATISTICS PROFILE : similar to SHOWPLAN_ALL, except it represents the data for the Actual execution plan
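As a quick sketch, the text plan options above are switched on and off with SET commands in the session before running the query (the sample query is just a placeholder from AdventureWorks):

```sql
-- Show the Estimated plan as text; the query itself is NOT executed
SET SHOWPLAN_ALL ON;
GO
SELECT Name FROM Production.Product WHERE ProductID = 316;
GO
SET SHOWPLAN_ALL OFF;
GO

-- STATISTICS PROFILE executes the query and returns the Actual plan rows
SET STATISTICS PROFILE ON;
GO
SELECT Name FROM Production.Product WHERE ProductID = 316;
GO
SET STATISTICS PROFILE OFF;
GO
```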

XML Plans

XML plans present the most complete set of data available on a plan, all on display in the structured XML format. There are two varieties of XML plan:
  • SHOWPLAN_XML : The plan generated by the optimizer prior to execution.
  • STATISTICS_XML : The XML format of the Actual execution plan.
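Similarly, the XML plan varieties are requested with SET commands (a sketch; the AdventureWorks query is just a placeholder):

```sql
-- Estimated plan as XML; the query is NOT executed
SET SHOWPLAN_XML ON;
GO
SELECT FirstName, LastName FROM Person.Person WHERE BusinessEntityID = 1;
GO
SET SHOWPLAN_XML OFF;
GO

-- Actual plan as XML; the query runs and the plan is returned alongside the results
SET STATISTICS XML ON;
GO
SELECT FirstName, LastName FROM Person.Person WHERE BusinessEntityID = 1;
GO
SET STATISTICS XML OFF;
GO
```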
Now let's look at a query and what its execution plan looks like. I am using the AdventureWorks sample database for the examples.
Here I am writing a full SELECT on the Person table. You can see in the Actual Execution Plan below that it does a full Clustered Index Scan. This is because SQL Server has to return all the values in the table using the clustered index.
Now let's write a query that restricts (with a WHERE filter) on the primary key; you can see it now uses a Seek on the clustered index to get the value.
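The two queries discussed above can be sketched as follows (AdventureWorks schema assumed; the screenshots originally showed the resulting plans):

```sql
-- Full SELECT: the plan shows a Clustered Index Scan (effectively a full table scan)
SELECT * FROM Person.Person;

-- Filtering on the primary key: the plan shows a Clustered Index Seek
SELECT * FROM Person.Person WHERE BusinessEntityID = 1;
```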
Scan operations are very costly, while Seek operations are performance-friendly since they use indexes to navigate directly to the required rows. Simply put, a Clustered Index Scan means a full table scan, which is the most costly way to retrieve data.
Now let's look at how the use of a non-clustered index shows up in an execution plan. The query below asks for the people whose PersonType attribute is set to 'EM'. The PersonType column does not have an index, so the Query Optimizer has no option but to go for a full clustered index scan to get the results.
Now let's ask SQL Server to return people whose first name is Alan. The Person table has a composite index on LastName, FirstName and MiddleName. As you can see, it uses an Index Scan on that composite index first and then does a Key Lookup on the clustered index to fetch the remaining columns.
Let's ask SQL Server to get the people named Dylan A Millar. In this scenario it uses an Index Seek on the composite index, fully utilizing the index for optimal performance. In simple terms, an Index Seek is the most efficient way to access the data; in the previous scenario SQL Server had to scan through the whole index because FirstName is not the leading column of that index.
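The three lookups above can be sketched with these queries (AdventureWorks assumed; the name values are simply the ones used in the original screenshots, and the exact plan chosen can vary with the optimizer's row estimates):

```sql
-- No index on PersonType: the plan falls back to a Clustered Index Scan
SELECT * FROM Person.Person WHERE PersonType = 'EM';

-- FirstName is not the leading column of the composite index:
-- expect an Index Scan on it plus a Key Lookup for the other columns
SELECT * FROM Person.Person WHERE FirstName = 'Alan';

-- All key columns supplied: expect an Index Seek on the composite index
SELECT *
FROM Person.Person
WHERE LastName = 'Millar' AND FirstName = 'Dylan' AND MiddleName = 'A';
```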
In conclusion, you must look at the execution plan when optimizing a poorly performing query. If an index is missing, you can easily identify it by looking at the execution plan as explained above. We will be discussing more about reading different types of execution plans in the next chapter of this article series.

Sunday, January 1, 2017

Be Conscious on Table Variables and Temp Tables

In SQL Server, when you want to store a table of data for later use, either a Temp Table or a Table Variable comes into play. But are you really conscious of the decision you make about which is the right fit for your situation? Most of the time when I ask someone why a table variable is used somewhere, they have no reason. So in this article I am going to describe the contrasting usage of the two types of temporary storage in SQL Server.

Table Variables

A table variable is a variable declared with a table type. Like temporary tables, table variables are backed by tempdb, although small ones may live mostly in memory. They can only be accessed within the batch and scope in which they are declared. Table variables cannot be dropped explicitly; once the batch execution finishes, they are dropped automatically.
Once a table variable is declared, its structure cannot be modified later. In simple terms, DDL commands cannot be run on table variables.
Below is the syntax of creating a table variable;
DECLARE @Customer TABLE (Id INT, Name VARCHAR(100), DateJoined DATETIME);

Temporary Tables

Mainly there are two types of temporary tables, namely Local Temporary Tables and Global Temporary Tables.
  • Local Temporary Table: These temporary tables are valid only in the session in which they were created. After the session terminates they are removed from tempdb. However, unlike table variables, local temporary tables can be dropped explicitly.
  • Global Temporary Table: These tables are available to all sessions and users. They are not deleted until the last session using them terminates. Similar to local temporary tables, a user can drop them explicitly.
Both of the above types of temporary tables are stored inside tempdb. Unlike table variables, the structure of a temporary table can be changed after its creation.
Below is the syntax of creating a temporary table;
CREATE TABLE #Customer (Id INT, Name VARCHAR(100), DateJoined DATETIME);

When to use table variables
Table variables are best suited for the scenarios below:
  1. Generally, if the data you are going to load into the table is only a few rows, a table variable will be the better choice.
  2. If you need the data in the table to survive a rollback of an outer user transaction, use a table variable. This is because table variables do not participate in the transaction.
  3. If the best query plan for the query using the table will never change, you should use a table variable. Using a table variable in this scenario skips the overhead of statistics creation and recompilation.
  4. If the table data is populated from a potentially expensive SELECT statement, note that using a table variable blocks the possibility of this using a parallel plan.
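Point 2 above can be demonstrated with a small sketch (the table and column names are just illustrative):

```sql
DECLARE @Survivor TABLE (Id INT);

BEGIN TRANSACTION;
    INSERT INTO @Survivor VALUES (1);

    CREATE TABLE #Casualty (Id INT);
    INSERT INTO #Casualty VALUES (1);
ROLLBACK TRANSACTION;

-- The table variable keeps its row: table variables do not participate
-- in the user transaction, so the INSERT above survives the rollback
SELECT COUNT(*) AS SurvivorRows FROM @Survivor;  -- 1

-- The temp table's INSERT was rolled back, and since its CREATE TABLE was
-- also part of the transaction, #Casualty no longer exists at this point
```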
When to use Temporary Tables
  1. If you are going to load thousands of rows into the table, a temporary table is the better choice.
  2. If you want to change your table structure after creating the table, you have no option but to use a temporary table.
  3. If you need an index that cannot be created implicitly through a UNIQUE or PRIMARY KEY constraint, you need a #temporary table, as it is not possible to create these on table variables.
  4. If you will be repeatedly adding and deleting large numbers of rows from the table, use a #temporary table, which supports TRUNCATE (more efficient than DELETE for large tables).
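Points 2 to 4 above can be sketched like this (illustrative names; behaviour as described in this post):

```sql
CREATE TABLE #Staging (Id INT, Name VARCHAR(100));

-- 2. DDL after creation is allowed on a temp table
ALTER TABLE #Staging ADD DateJoined DATETIME;

-- 3. An explicit non-unique index, which a table variable cannot have
CREATE INDEX IX_Staging_Name ON #Staging (Name);

-- 4. TRUNCATE works on temp tables (not on table variables)
TRUNCATE TABLE #Staging;

DROP TABLE #Staging;
```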
The Temporary Table and Table Variables can be both strong and weak depending on the scenario presented. We hope this article will give you enough information for you to make a correct conscious decision.

Thursday, January 29, 2015

InnoDB or MyISAM, That's the question

Currently MySQL ships with two main storage engines, InnoDB (the default from MySQL 5.5 onwards) and MyISAM. When you are creating a table you may be faced with the question: which one should I choose, and why? This article is an attempt to shed some light and help you make the correct decision.

Before going any further, let's see what a storage engine actually does. In simple terms it is the most integral part of any database engine. If we put it into a definition: "A storage engine is what stores, handles, and retrieves information from a table". A marvel of MySQL is that we can have different storage engines for different tables. Even though there are plenty of storage engines supported by MySQL, this article will focus on InnoDB and MyISAM, which are the most common and frequently used.


MyISAM

The MyISAM type is actually a branch, or child, of the ISAM engine. If you are OK with not having transactions, and with having table-level locking as opposed to row-level locking, MyISAM typically offers the best performance and functionality. The MyISAM type is great for sites that have a very low INSERT/UPDATE rate and a very high SELECT rate. Other options should be considered for very high INSERT/UPDATE workloads, because the restrictions of table-level locking cause a decrease in performance.

The maximum number of rows supported by MyISAM is ~4.29E+09, with up to 64 indexes per table. It is also good to note that TEXT/BLOB fields can be fully indexed for cases such as searching.


InnoDB

Compared to MyISAM, InnoDB provides many more features to increase performance. There is some additional time spent during initial setup, as opposed to MyISAM, but the benefits far outweigh the time spent. One major difference is the ability to do row-level locking, as opposed to table-level locking, to improve concurrency. This allows parallel INSERT/UPDATE/DELETE queries to be run on the same table, unlike MyISAM, where each write query has to wait its turn.

Additionally, InnoDB provides foreign key functionality. This allows you to ensure that dependent data in table one is present before inserting data into table two. Likewise, it prevents data in table one from being deleted if there is data in table two that depends on it.
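The foreign key behaviour described above can be sketched as follows (hypothetical table names; note that all tables involved must use InnoDB for the constraint to be enforced):

```sql
CREATE TABLE customers (
    id INT NOT NULL AUTO_INCREMENT,
    name VARCHAR(100) NOT NULL,
    PRIMARY KEY (id)
) ENGINE=InnoDB;

CREATE TABLE orders (
    id INT NOT NULL AUTO_INCREMENT,
    customer_id INT NOT NULL,
    PRIMARY KEY (id),
    FOREIGN KEY (customer_id) REFERENCES customers (id)
) ENGINE=InnoDB;

-- Inserting an order for a non-existent customer now fails, and deleting
-- a customer that still has orders fails as well
```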

InnoDB also provides caching for data and indexes in memory, as well as on disk, which provides a large performance gain. For those low on RAM this might not be the ideal solution, but with memory as cheap as it is, this is becoming less of an issue.


In conclusion, while there is no perfect catchall storage engine, it can be fairly safe to say that InnoDB and MyISAM are the typical go-to for many applications and DBAs. It is good to note, however, that though they are a good catchall, they may not be ideal for every situation and there may be another storage engine available that will increase performance in your application/environment.

Friday, January 25, 2013

Impersonating a Windows Login in SQL Server

In our day-to-day life as DBAs, the most common issues we get are related to security, especially accounts not having the required access to do certain things. This can get really messy if you don't have a well-defined security architecture in place. From SQL Server 2005 onwards, Microsoft introduced role-based security, which has reduced the effort of DBAs remarkably. However, if you mess up the security roles, it will be a nightmare to fix a security issue.

So in this article I am providing a simple script which can be really handy when you want to check how a DB object behaves when it's called using a specific Windows login in SQL Server.

SETUSER 'DomainName\UserName'

-- Here goes your SP call or any other query you want to check; it will run
-- using the credentials of the above user and report any errors that user
-- may encounter

SETUSER  -- with no arguments, SETUSER reverts to your original identity
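On SQL Server 2005 and later, the same check can also be done with EXECUTE AS, which is the recommended replacement for the deprecated SETUSER (the domain and login name are placeholders):

```sql
EXECUTE AS LOGIN = 'DomainName\UserName';

-- Run the SP call or query you want to test under that login's permissions
SELECT SUSER_SNAME() AS CurrentLogin;

REVERT;  -- switch back to your original login
```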

Always happy to answer any queries via comments.

Thursday, January 24, 2013

FIX : CLR .NET Assemblies security error after Database Restore

Recently I had to migrate a massive database from one hosting provider to another (the awesome Amazon Web Services). After backup and restore, I noticed that all the stored procedures calling CLR assemblies failed with a security error.

"Msg 10314, Level 16, State 11, Procedure xxx_xxxxxx, Line 285
An error occurred in the Microsoft .NET Framework while trying to load assembly id 65627. The server may be running out of resources, or the assembly may not be trusted with PERMISSION_SET = EXTERNAL_ACCESS or UNSAFE. Run the query again, or check documentation to see how to solve the assembly trust issues. For more information about this error:
System.IO.FileLoadException: Could not load file or assembly" 

This was really strange, as it was a direct restore and all the assemblies reside inside the DB itself. So here is what you have to do to fix this issue:

1. Query the tables sys.assembly_files and sys.assemblies to make sure your assemblies are present in the restored DB. If not, you have to re-register the assemblies.
2. Make sure your restored DB has an owner (ideally sa).
3. Make sure the login that owned the DB on the previous server exists on the new server. If it is a SQL Server login, the new login should have the same SID as the previous server's login; for Windows logins, just having the same name is sufficient.
4. Then run the command below to make your DB trustworthy.

ALTER DATABASE YourDatabase SET TRUSTWORTHY ON  -- after restoration the database has this option set to OFF
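Steps 1 and 3 above can be checked with queries along these lines (a sketch; the database name is a placeholder):

```sql
-- Step 1: confirm the assemblies survived the restore
SELECT a.name, af.name AS file_name
FROM sys.assemblies AS a
JOIN sys.assembly_files AS af ON af.assembly_id = a.assembly_id;

-- Step 3: compare the database owner's SID with the server login's SID;
-- for SQL Server logins these must match after a cross-server restore
SELECT d.name, d.owner_sid, sp.name AS login_name, sp.sid AS login_sid
FROM sys.databases AS d
LEFT JOIN sys.server_principals AS sp ON sp.sid = d.owner_sid
WHERE d.name = 'YourDatabase';
```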

This should solve your issue. Please share with me if you have any trouble with this workaround.

Tuesday, September 13, 2011

Round UP with CEILING

Rounding up in T-SQL is a bit tricky, but if you just want to round a decimal value up to the nearest integer, you can use the T-SQL math function CEILING to achieve this easily.

The CEILING function returns the smallest integer that is greater than or equal to the number entered.

SELECT CEILING(43.98) -- Returns 44
SELECT CEILING(43.18) -- Returns 44
SELECT CEILING(0.18) -- Returns 1
SELECT CEILING(-0.18) -- Returns 0
SELECT CEILING(-43.18) -- Returns -43
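For contrast, a quick sketch of the related T-SQL math functions FLOOR (round down) and ROUND (round to nearest):

```sql
SELECT FLOOR(43.98)   -- Returns 43
SELECT FLOOR(-0.18)   -- Returns -1
SELECT ROUND(43.4, 0) -- Returns 43.0 (nearest integer; the input's scale is preserved)
SELECT ROUND(43.5, 0) -- Returns 44.0
```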

MSDN reference :