Wednesday, March 9, 2011

Parameter sniffing is one of the oldest blockers in query optimisation. Once understood correctly, it can be quite easy to resolve.

Let's create a table of about 10,000 records and name it LargeTable. Here is the script for creating it.

CREATE TABLE LargeTable (
ID INT,
SomeInt int,
RandomDate DATETIME
)
GO

;WITH DataPopulate (RowNo, Strng, ADate) AS (
    -- Anchor row: RowNo 1 gets the value 5
    SELECT 1 AS RowNo, 5 AS Strng, DATEADD(dd, FLOOR(RAND()*75454), '1753/01/01')
    UNION ALL
    -- Every tenth row gets 5; the rest get 89, so the data is heavily skewed
    SELECT RowNo+1,
           CASE WHEN RowNo%10 = 0 THEN 5 ELSE 89 END AS Strng,
           DATEADD(dd, FLOOR(RAND(RowNo*96322)*85454), '1753/01/01')
    FROM DataPopulate
    WHERE RowNo < 10000
)
INSERT INTO LargeTable
SELECT * FROM DataPopulate
OPTION (MAXRECURSION 10000)
GO

CREATE CLUSTERED INDEX idx_Large1 on LargeTable (ID)
GO
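
Before creating the procedure, it is worth verifying the skew the script produced. This quick sanity check is my own addition, not part of the original post; it should show roughly 1,000 rows holding the value 5 and roughly 9,000 holding 89.

-- Verify the skewed distribution: ~1,000 rows with 5, ~9,000 with 89
SELECT SomeInt, COUNT(*) AS RowCnt
FROM LargeTable
GROUP BY SomeInt
GO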


Now let us create a stored procedure that returns records from the table.


CREATE PROCEDURE TestSniffing @int int AS
SELECT * from largetable where someint = @int
GO

Let us execute the SP:

Exec TestSniffing 5



Look at the red part highlighted in the screenshot. The Estimated Number of Rows and the Actual Number of Rows match. So far so good.
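
If you cannot see the screenshots, the same comparison can be made without the graphical plan. A sketch of one way to do it: wrapping the execution in SET STATISTICS XML returns the actual plan as XML, whose EstimateRows and ActualRows attributes carry the two figures shown in the screenshots.

-- Return the actual execution plan as XML alongside the results;
-- compare the EstimateRows and ActualRows attributes in the plan
SET STATISTICS XML ON
GO
Exec TestSniffing 5
GO
SET STATISTICS XML OFF
GO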

Now let us change the parameter value to 89. From the population script, 89 is the value that appears in roughly 9,000 of the 10,000 rows.

Exec TestSniffing 89

Now refer to the highlighted red part of the following screenshot.



The Estimated Number of Rows and the Actual Number of Rows now differ by about 8,000.
Why did this happen? The answer is parameter sniffing. The plan was compiled for the first value the procedure was called with (5, which matches roughly 1,000 rows), and that cached plan, with its estimate of about 1,000 rows, was reused for 89, which actually matches roughly 9,000 rows. A plan sized for the wrong rowcount can be sub-optimal.
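
One way to convince yourself that the cached plan is the culprit (my own check, not from the original post) is to clear the plan cache and call the procedure with 89 first; the estimate then tracks 89 instead of 5. Do not run DBCC FREEPROCCACHE on a production server, as it flushes every cached plan.

-- Clear the plan cache (test servers only!) so the next call compiles fresh
DBCC FREEPROCCACHE
GO
-- Now 89 is the sniffed value, and the estimate follows it
Exec TestSniffing 89
GO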

Now, how should this be resolved? Most of the available solutions suggest copying the parameter into a procedure-level variable and then using that variable as the predicate in the actual query. Let us examine whether that solves the problem.

First, let us alter the SP accordingly.

ALTER PROCEDURE TestSniffing @int int AS
Declare @varint int
Set @varint=@int
SELECT * from largetable where someint = @varint
GO

Now let us execute the SP with values 5 and 89 and examine the graphical execution plan.

Exec TestSniffing 5



Again pay attention to the red part marked in the screenshot. The difference between the Estimated and the Actual Number of Rows has decreased, but they are still not equal.

Let us try changing the parameter value to 89

Exec TestSniffing 89

Again there seems to be a huge difference between the Actual and the Estimated Number of Rows. The reason is that with a local variable the optimizer can no longer sniff a value at compile time, so it falls back on the column's average density: with only two distinct values in SomeInt, it estimates about half the table (around 5,000 rows) regardless of the value passed. That estimate sits between the two actual rowcounts (roughly 1,000 and 9,000), which is why the gap shrank for 5 but remains large here.
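
You can check this density figure yourself. A sketch, assuming AUTO_CREATE_STATISTICS is on so that a _WA_Sys_... statistics object was auto-created on SomeInt by the earlier queries (the exact name will differ on your server, so the name below is a placeholder):

-- Find the auto-created statistics object on LargeTable
SELECT name FROM sys.stats WHERE object_id = OBJECT_ID('LargeTable')
GO
-- Substitute the _WA_Sys_... name returned above. With two distinct
-- values in SomeInt, "All density" is 0.5, and 0.5 * 10,000 rows
-- gives the ~5,000-row estimate seen in the plan.
DBCC SHOW_STATISTICS ('LargeTable', '_WA_Sys_...') WITH DENSITY_VECTOR
GO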



So what should be the solution?

Well, one can execute the SP using the WITH RECOMPILE option, like this:

Exec TestSniffing 89 With Recompile

But before doing that, you need to alter the SP back to its original state:

ALTER PROCEDURE TestSniffing @int int AS
SELECT * from largetable where someint = @int
GO

The query plan generated will have an exact match between the Estimated and the Actual Number of Rows.



Let us try a value of 5:

Exec TestSniffing 5 With Recompile



It gives an exact match between the Estimated and the Actual Number of Rows.

Adding the WITH RECOMPILE option seems to be the only way, in this example, to get an exact match between the estimated and actual row counts. Even updating the statistics did not seem to help.
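
A closely related approach, which I have not shown above, is to bake the recompile into the procedure itself with a statement-level OPTION (RECOMPILE) hint, so callers do not have to remember WITH RECOMPILE. The trade-off in both cases is a compilation on every execution:

ALTER PROCEDURE TestSniffing @int int AS
-- Recompile this statement on every execution, so the plan is
-- always built for the actual parameter value passed in
SELECT * from largetable where someint = @int OPTION (RECOMPILE)
GO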

Saturday, January 22, 2011

Today I am going to post about why you should NOT shrink log files periodically. Let us create a test database and name it "Test".

Create Database Test
Go


Now, to get the log space used by our newly created database, you can use the following DBCC command.

DBCC SQLPERF(logspace)

In our case the log size for the newly created database is 0.484375 MB.




Now let us create a single table, say T, with a single column id, insert about a million records into it, and then run DBCC SQLPERF(logspace) again to get the log size.
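
The post does not include the population script; a minimal sketch that matches the description (the table name T and column id are from the post, the population method below is my assumption) could look like this:

Use Test
Go

CREATE TABLE T (id INT)
GO

-- Insert ~1,000,000 rows; the inserts generate a large volume of log records
INSERT INTO T (id)
SELECT TOP (1000000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM sys.all_columns a CROSS JOIN sys.all_columns b
GO

DBCC SQLPERF(logspace)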

As you can see, the log size has now increased by about 300 percent.



The following query will give the logical name of the log file for the database we just created.

Use Test
Go

select name from sys.sysfiles

The query lists all of the database's files; in our case the log file is Test_log.
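
Note that sys.sysfiles is a SQL Server 2000-era compatibility view; on SQL Server 2005 and later the same information is available from sys.database_files, which also lets you filter to just the log file:

-- Newer catalog view; type_desc = 'LOG' isolates the log file
select name from sys.database_files where type_desc = 'LOG'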

Now let us shrink the log file using the DBCC SHRINKFILE command.

DBCC SHRINKFILE(Test_log)

The above command shrinks the log file back to approximately its original size, as when the database was first created.



Now let us delete the records from the only table in the database. This will cause a lot of log records to be written to the log file.

Delete from T where id>0

Again, let us check the log file size.

Wow, the log file has now bloated to almost 460 percent of its original size.


So the conclusion from all of the above is that the log file will grow again no matter how many times you shrink it, as long as the database keeps generating log records.

So always have a good backup strategy to keep the log file size in check; in the FULL recovery model, regular log backups allow log space to be reused instead of growing the file. Log shrinking should be reserved for genuine emergencies only.
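
As a sketch of what such a strategy looks like (the disk paths below are placeholders, and this assumes the Test database is in the FULL recovery model):

-- A full backup must exist before the first log backup
BACKUP DATABASE Test TO DISK = 'C:\Backups\Test_full.bak'
GO
-- Periodic log backups mark log space as reusable,
-- keeping the file size in check without shrinking
BACKUP LOG Test TO DISK = 'C:\Backups\Test_log.trn'
GO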