Note: Most of the articles on this blog are taken from other reputable blogs and websites, so the author is not responsible for any issues.

General SQL Server Performance Tuning Tips

When your transaction log grows large and you want a quick way to shrink it, try this option. Change the recovery model of the database you want to shrink from "full" to "simple," shrink the log file, then switch back to the "full" recovery model and take a full backup to restart the log chain. By temporarily changing from the full recovery model to the simple recovery model and back, SQL Server keeps only the "active" portion of the log, which is very small. [7.0, 2000, 2005] Contributed by Tom Kitta. Updated 5-7-2007
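A minimal sketch of the sequence, assuming a database named YourDatabase with a log file whose logical name is YourDatabase_Log (the names and backup path are placeholders):

-- Switch to the simple recovery model; the inactive portion of the log is truncated at checkpoint.
ALTER DATABASE YourDatabase SET RECOVERY SIMPLE

-- Shrink the log file to roughly 100 MB (pick a target that suits your workload).
DBCC SHRINKFILE (YourDatabase_Log, 100)

-- Switch back to the full recovery model.
ALTER DATABASE YourDatabase SET RECOVERY FULL

-- Take a full backup so log backups (and point-in-time recovery) can resume.
BACKUP DATABASE YourDatabase TO DISK = 'C:\Backups\YourDatabase.bak'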

*****
If you need to delete all the rows in a table, don't use DELETE to do it, as DELETE is a fully logged operation that records every row it removes and can take a significant amount of time, especially if the table is large. To perform the same task much faster, use TRUNCATE TABLE instead, which is a minimally logged operation. Besides deleting all of the records in the table, this command also resets the seed of any IDENTITY column back to its original value.
After you have run the TRUNCATE TABLE statement, it is important to manually update the statistics on the table using UPDATE STATISTICS. This is because TRUNCATE TABLE does not reset the statistics for the table, which means that as you add data back to the table, the statistics will be incorrect for a period of time. Of course, if you wait long enough, and if you have Auto Update Statistics turned on for the database, the statistics will eventually catch up. But this may not happen quickly, resulting in slow queries because the Query Optimizer is using out-of-date statistics. [6.5, 7.0, 2000, 2005] Updated 5-7-2007
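A quick sketch, using a placeholder table name:

-- Remove all rows (minimally logged, reseeds any IDENTITY column).
TRUNCATE TABLE YourTable

-- Refresh the table's statistics so the optimizer isn't working from stale
-- figures as rows are reloaded.
UPDATE STATISTICS YourTable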
*****
If you use TRUNCATE TABLE instead of DELETE to remove all of the rows of a table, TRUNCATE TABLE will not work when there are Foreign Key references present for that table. A workaround is to DROP the constraints before firing the TRUNCATE. Here's a generic script that will drop all existing Foreign Key constraints on a specific table:
-- Build a work table holding one DROP CONSTRAINT command per foreign key
-- constraint defined on the target table.
CREATE TABLE dropping_constraints
(
    cmd VARCHAR(8000)
)

INSERT INTO dropping_constraints
SELECT 'ALTER TABLE [' + t2.TABLE_NAME + '] DROP CONSTRAINT [' + t1.CONSTRAINT_NAME + ']'
FROM INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS t1
INNER JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE t2
    ON t1.CONSTRAINT_NAME = t2.CONSTRAINT_NAME
WHERE t2.TABLE_NAME = 'your_tablename_goes_here'

-- Execute each command, removing it from the work table as it completes.
DECLARE @stmt VARCHAR(8000)
DECLARE @rowcnt INT

SELECT TOP 1 @stmt = cmd FROM dropping_constraints
SET @rowcnt = @@ROWCOUNT

WHILE @rowcnt <> 0
BEGIN
    EXEC (@stmt)
    DELETE FROM dropping_constraints WHERE cmd = @stmt
    SELECT TOP 1 @stmt = cmd FROM dropping_constraints
    SET @rowcnt = @@ROWCOUNT
END

DROP TABLE dropping_constraints
The above code can also be extended to drop all FK constraints in the current database. To achieve this, just comment out the WHERE clause. [7.0, 2000] Updated 5-7-2007
*****
Don't run a screensaver on your production SQL Servers; it can unnecessarily use CPU cycles that should be going to your application. The only exception to this is the "blank screen" screensaver, which is OK to use. [6.5, 7.0, 2000, 2005] Updated 5-7-2007
*****
Don't run SQL Server on the same physical server that you are running Terminal Services, or Citrix software. Both Terminal Services and Citrix are huge resource hogs, and will significantly affect the performance of SQL Server. Running the administrative version of Terminal Services on a SQL Server physical server, on the other hand, is OK, and a good idea from a convenience point of view. As is mentioned in other parts of this website, ideally, SQL Server should run on a dedicated physical server. But if you have to share a SQL Server with another application, make sure it is not Terminal Services or Citrix. [7.0, 2000, 2005] Updated 5-7-2007
*****
Use sp_who or sp_who2 (sp_who2 is not documented in the SQL Server Books Online, but offers more details than sp_who) to provide locking and performance-related information about current connections to SQL Server. Sometimes, when SQL Server is very busy, you can't use Enterprise Manager or Management Studio to view current connection activity via the GUI, but you can always use these two commands from Query Analyzer or Management Studio, even when SQL Server is very busy. [6.5, 7.0, 2000, 2005] Updated 5-7-2007
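For example (sp_who accepts an optional 'active' argument, and the undocumented sp_who2 generally accepts the same one):

-- Show every current connection, with extra detail from sp_who2.
EXEC sp_who2

-- Limit the output to sessions that are actually doing work.
EXEC sp_who 'active'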
*****
SQL Server 7.0 and 2000 use their own internal thread scheduler (called the UMS, or User Mode Scheduler) when running in either native thread mode or fiber mode. By examining the UMS's Scheduler Queue Length, you can help determine whether the CPU or CPUs on the server are a bottleneck.
This is similar to checking the System Object: Processor Queue Length counter in Performance Monitor. If you are not familiar with this counter, it tells you how many threads are waiting to be executed on the server. Generally, if there are more than two threads waiting to be executed, the server can be assumed to have a CPU bottleneck.
The advantage of using the UMS's Scheduler Queue Length over the System Object: Processor Queue Length is that it focuses strictly on SQL Server threads, not all of the threads running on the server.
To view what is going on inside the UMS, you can run the following undocumented command:
DBCC SQLPERF(UMSSTATS)
For every CPU in your server, you will get one Scheduler. Each Scheduler is identified with a number, starting with 0. So if you have four CPUs in your server, there will be four Schedulers listed after running the above command, with Scheduler IDs 0 through 3.
The "num users" tells you the number of SQL threads there are for a specific scheduler.
The "num runnable," or better known as the "Scheduler Queue Length," is the key indicator to watch. Generally, this number will be 0, which indicates that there are no SQL Server threads waiting to run. If this number is 2 or more, this indicates a possible CPU bottleneck on the server. Keep in mind that the values presented by this command are point data, which means that the values are only accurate for the split second when they were captured, and will be always changing. But if you run this command when the server is very busy, the results should be indicative of what is going on at that time. You may want to run this command multiple time to see what is going on over time.
The "num workers" refers to the actual number of worker threads there are in the thread pool.
The "idle workers" refers to the number of idle worker threads.
The "cntxt switches" refers to the number of context switches between runnable threads.
The "cntxt switches(idle)" refers to the number of context switches to "idle" threads.

Denormalization in SQL Server for Fun and Profit

Almost from birth, database developers are taught that their databases must be normalized.  In many shops, failing to fully normalize can result in anything from public ridicule to exile to the company’s Siberian office.  Rarely discussed are the significant benefits that can accrue from intentionally denormalizing portions of a database schema.  Myths about denormalization abound, such as:

  • A normalized schema is always more stable and maintainable than a denormalized one.
  • The only benefit of denormalization is increased performance.
  • The performance increases from denormalization aren’t worth the drawbacks.
This article will address the first two points (I’ll tackle the final point in the second part of this series).  Other than for increased performance, when might you want to intentionally denormalize your structure?  A primary reason is to “future-proof” your application from changes in business logic that would force significant schema modifications.
Let’s look at a simple example.  You’re designing a database for a pizza store.  Each customer’s order contains one or more pizzas, and each order is assigned to a delivery driver.  In normal form, your schema looks like:
Table: Orders
Customer
Driver
Amount

Table: OrderItems
Order
Pizza Type
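As a rough DDL sketch of this normalized layout (the column names and data types here are assumptions, not from the article):

CREATE TABLE Orders
(
    OrderID  INT IDENTITY(1,1) PRIMARY KEY,
    Customer VARCHAR(100) NOT NULL,
    Driver   VARCHAR(100) NULL,      -- one driver per order
    Amount   MONEY        NOT NULL
)

CREATE TABLE OrderItems
(
    OrderItemID INT IDENTITY(1,1) PRIMARY KEY,
    OrderID     INT NOT NULL REFERENCES Orders(OrderID),
    PizzaType   VARCHAR(50) NOT NULL
)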
Planning Ahead. Let's say you've heard the owner is considering a new delivery model. To increase customer satisfaction, every pizza will be boxed and sent for delivery the moment it comes out of the oven, even if other pizzas in the order are still baking.
Since you’re a savvy developer, you plan for this and denormalize your data structure.  Thoughtoday, the driver column is functionally dependent only on the order itself, you cross your fingers, take a deep breath, and violate Second Normal Form by placing it in the OrderItems table.  There—you’ve just future-proofed your application.   Orders can now have multiple drivers. 
Your denormalization has introduced a small update anomaly (if an order's driver changes, you have to update multiple rows rather than just one), but if the probability of the delivery model changing is high, this is well worth the cost. This is typical when denormalizing, but it is usually a small problem, and one that can be handled automatically via triggers, constraints, or other means. For instance, in this case you can create (or modify the existing) update stored procedure for Orders to cascade the change into OrderItems. Alternatively, you can create an UPDATE trigger on OrderItems that ensures all rows within one order have the same driver, as sketched below. When the rule changes in the future, just remove the trigger; there is no need to update your tables or any queries that reference them.
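A minimal trigger sketch, assuming the denormalized OrderItems table now carries OrderID, PizzaType, and Driver columns (all names here are illustrative, not from the article):

-- Sketch only: reject any change that leaves an order with more than one distinct driver.
CREATE TRIGGER trg_OrderItems_OneDriverPerOrder
ON OrderItems
AFTER INSERT, UPDATE
AS
BEGIN
    IF EXISTS
    (
        SELECT 1
        FROM OrderItems oi
        WHERE oi.OrderID IN (SELECT OrderID FROM inserted)
        GROUP BY oi.OrderID
        HAVING COUNT(DISTINCT oi.Driver) > 1
    )
    BEGIN
        RAISERROR('All items in an order must currently share one driver.', 16, 1)
        ROLLBACK TRANSACTION
    END
END

When the single-driver rule is retired, dropping the trigger is the only schema change required.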
Now let’s consider a slightly more complex (and somewhat more realistic) case.   Imagine an application to manage student and teacher assignments for an elementary school.    A sample schema might be:

Table: Teachers
Teacher (PK)
Classroom

Table: Students
Student (PK)
Teacher (FK)

Planning Ahead. You happen to know that other elementary schools in the region are assigning secondary teachers to some classrooms. You decide to support this in advance within your schema. How would you do it via denormalization? The ugly "repeating groups" approach of adding a "Teacher2" column is one option, but not one that should appeal to you. Far better to make the classroom itself the primary key and move teachers to a child table:
Table: Classrooms
Classroom  (PK)

Table: Teachers
Teacher  (PK)
Classroom  (FK)

Table: Students
Student  (PK)
Classroom  (FK)

As before, this denormalization creates a problem we need to address. In the future, the school may support multiple teachers in one classroom, but today that's an error. You solve that by the simple expedient of adding a unique constraint on the Classroom FK in the Teachers table. When the business rule changes in the future, you simply remove the constraint. Voila! A far better solution than having to significantly alter your views, queries, and stored procs to conform to a new schema.
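A rough DDL sketch of the denormalized arrangement (column types are assumptions), including the unique constraint that enforces today's one-teacher-per-classroom rule:

CREATE TABLE Classrooms
(
    Classroom VARCHAR(20) PRIMARY KEY
)

CREATE TABLE Teachers
(
    Teacher   VARCHAR(100) PRIMARY KEY,
    Classroom VARCHAR(20)  NOT NULL REFERENCES Classrooms(Classroom),
    CONSTRAINT UQ_Teachers_Classroom UNIQUE (Classroom)   -- drop this when co-teaching arrives
)

CREATE TABLE Students
(
    Student   VARCHAR(100) PRIMARY KEY,
    Classroom VARCHAR(20)  NOT NULL REFERENCES Classrooms(Classroom)
)

When the school adopts secondary teachers, dropping UQ_Teachers_Classroom is the only change needed; the tables, views, and queries built on this schema stay exactly as they are.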