Practical SQL Server Query Optimization Tips

The Problem

Slow queries are one of the most common performance issues in enterprise web applications. After spending considerable time tuning SQL Server performance at work, I've collected the techniques that consistently deliver results.

The thing about query performance is that it's rarely a problem on day one. Your application launches, queries run fast against a few thousand rows, and everything feels snappy. Then six months later, tables have millions of rows, and users start complaining about pages that take 10 seconds to load. The same queries that worked fine are now bottlenecks.

Reading Execution Plans

Before you optimize anything, learn to read execution plans. This is the single most important skill for SQL Server performance tuning.

In SQL Server Management Studio, press Ctrl+M to include the actual execution plan, then run your query. The plan shows you exactly how SQL Server processes your query — which indexes it uses, where it does table scans, and where the most time is spent.

Key things to look for:

  • Table Scans and Clustered Index Scans — these read the entire table. If your table has millions of rows and you only need 10, this is a problem.
  • Key Lookups — SQL Server found your row using a nonclustered index but had to go back to the clustered index to get additional columns. A few of these are fine; thousands are not.
  • Sort operations with high cost — often indicates a missing index that could provide pre-sorted data.
  • Fat arrows — the thicker the arrow between operators, the more rows are flowing through. Look for places where the row count drops dramatically — that's where your filter is, and it should happen as early as possible.

-- Enable statistics to see I/O and time details
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- Run your query here
SELECT o.OrderId, o.OrderDate, c.CustomerName
FROM Orders o
JOIN Customers c ON o.CustomerId = c.CustomerId
WHERE o.Status = 'Active' AND o.OrderDate > '2024-01-01';

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;

The STATISTICS IO output shows logical reads — how many 8KB pages SQL Server had to read. This is the number you want to minimize. A query doing 50,000 logical reads that could do 50 is a query worth optimizing.
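For the query above, the output looks something like this (the numbers here are illustrative, not from a real run):

```text
Table 'Orders'. Scan count 1, logical reads 48210, physical reads 0, ...
Table 'Customers'. Scan count 1, logical reads 12, physical reads 0, ...

 SQL Server Execution Times:
   CPU time = 890 ms,  elapsed time = 1270 ms.
```

Nearly 50,000 logical reads on Orders for what should be a small result set is a strong hint that the Status/OrderDate filter isn't backed by a useful index.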

Indexing Strategy

The single biggest impact comes from proper indexing. A few rules I follow:

  1. Always check the execution plan before and after changes
  2. Cover your WHERE clauses — if a query filters on Status and CreatedDate, a composite index on both columns is often better than two separate indexes
  3. Include columns to avoid key lookups — use INCLUDE in your index definition for columns that are selected but not filtered on
  4. Consider column order — in a composite index, put the most selective column first (the one that narrows down the most rows)

CREATE NONCLUSTERED INDEX IX_Orders_StatusDate
ON Orders (Status, CreatedDate)
INCLUDE (CustomerName, TotalAmount);

This index is called a covering index because it contains all the columns the query needs. SQL Server can satisfy the entire query from the index without touching the base table — zero key lookups.

When NOT to Add Indexes

Indexes aren't free. Each index:

  • Takes up disk space
  • Slows down INSERT, UPDATE, and DELETE operations (because every index needs to be updated)
  • Needs to be maintained (fragmentation over time)

For tables that are write-heavy and read-light, be cautious with indexing. A table that receives 10,000 inserts per minute but is only queried in a nightly report probably doesn't need five nonclustered indexes — it needs one or two that serve the report query.
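To find indexes that cost writes but earn few reads, a sketch against the sys.dm_db_index_usage_stats DMV helps (note the counters reset when the instance restarts, so check uptime before trusting them):

```sql
-- Indexes with many writes but few reads since the last restart
SELECT
    OBJECT_NAME(s.object_id) AS TableName,
    i.name AS IndexName,
    s.user_seeks + s.user_scans + s.user_lookups AS Reads,
    s.user_updates AS Writes
FROM sys.dm_db_index_usage_stats s
JOIN sys.indexes i ON s.object_id = i.object_id AND s.index_id = i.index_id
WHERE s.database_id = DB_ID()
    AND i.type_desc = 'NONCLUSTERED'
ORDER BY s.user_updates DESC;
```

An index with thousands of writes and near-zero reads is a candidate for removal, though it's worth confirming it doesn't back a unique constraint or a rare but important query first.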

Index Maintenance

Indexes fragment over time as data is inserted, updated, and deleted. Fragmentation above 30% can noticeably impact scan performance. Set up a maintenance plan:

-- Check fragmentation
SELECT
    OBJECT_NAME(ips.object_id) AS TableName,
    i.name AS IndexName,
    ips.avg_fragmentation_in_percent,
    ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
JOIN sys.indexes i ON ips.object_id = i.object_id AND ips.index_id = i.index_id
WHERE ips.avg_fragmentation_in_percent > 10
    AND ips.page_count > 1000
ORDER BY ips.avg_fragmentation_in_percent DESC;

General rule: reorganize indexes with 10-30% fragmentation, rebuild indexes above 30%.
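Applying that rule to the IX_Orders_StatusDate index from earlier looks like this (a sketch; ONLINE rebuilds require Enterprise edition or Azure SQL):

```sql
-- 10-30% fragmentation: reorganize (always online, lightweight)
ALTER INDEX IX_Orders_StatusDate ON Orders REORGANIZE;

-- Above 30%: rebuild (use ONLINE = ON where the edition supports it)
ALTER INDEX IX_Orders_StatusDate ON Orders REBUILD WITH (ONLINE = ON);
```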

Dapper vs Entity Framework

I use both, depending on the situation:

  • Entity Framework Core for standard CRUD operations and when rapid development matters
  • Dapper for read-heavy reporting queries, bulk operations, and anywhere performance is critical

Dapper's overhead is minimal — it maps results directly to your objects without EF Core's change tracking. In benchmarks, Dapper is very close to raw ADO.NET, while EF Core adds measurable overhead for large result sets.

// Dapper — fast and direct
var orders = await connection.QueryAsync<Order>(
    "SELECT OrderId, OrderDate, CustomerName, TotalAmount " +
    "FROM Orders WHERE Status = @Status AND CreatedDate > @Since",
    new { Status = "Active", Since = DateTime.UtcNow.AddDays(-30) });

When to Choose Which

Use EF Core when:

  • You need change tracking (editing an entity and calling SaveChanges)
  • You're building CRUD screens with standard create/read/update/delete operations
  • You want to use LINQ for type-safe query composition
  • You're working with complex object graphs (navigation properties, includes)

Use Dapper when:

  • You're reading data for display (no modifications needed)
  • You need to run a stored procedure that returns a specific result shape
  • Performance benchmarks show EF Core is a bottleneck for a specific query
  • You're doing bulk inserts or updates

In practice, I often use both in the same project. EF Core handles the standard CRUD through repositories, and Dapper handles the performance-critical queries, often backed by stored procedures:

public class OrderRepository
{
    private readonly AppDbContext _context;
    private readonly IDbConnection _connection;

    public OrderRepository(AppDbContext context, IDbConnection connection)
    {
        _context = context;
        _connection = connection;
    }

    // EF Core for writes
    public async Task<Order> CreateAsync(Order order)
    {
        _context.Orders.Add(order);
        await _context.SaveChangesAsync();
        return order;
    }

    // Dapper for read-heavy queries
    public async Task<IEnumerable<OrderSummary>> GetDashboardSummaryAsync(DateTime since)
    {
        return await _connection.QueryAsync<OrderSummary>(
            "usp_GetOrderDashboard",
            new { Since = since },
            commandType: CommandType.StoredProcedure);
    }
}

Stored Procedures

For complex business logic that involves multiple tables or needs to run atomically, stored procedures still make sense. They also give you a clear performance baseline you can tune independently of application code.

Advantages of stored procedures:

  • Execution plans are cached — the first call compiles the plan, subsequent calls reuse it (parameterized ad-hoc queries are cached too, but procedures make the reuse predictable)
  • Reduced network traffic — you send a procedure name and parameters, not a large SQL string
  • Security — you can grant execute permissions on procedures without giving direct table access
  • Encapsulation — database logic stays in the database, application code calls a clean interface

A pattern I use often is a stored procedure for complex reports that join multiple tables:

CREATE PROCEDURE usp_GetOrderDashboard
    @Since DATETIME,
    @Status NVARCHAR(20) = NULL
AS
BEGIN
    SET NOCOUNT ON;

    SELECT
        o.OrderId,
        o.OrderDate,
        c.CustomerName,
        o.TotalAmount,
        COUNT(oi.ItemId) AS ItemCount,
        s.ShipmentStatus,
        s.TrackingNumber
    FROM Orders o
    INNER JOIN Customers c ON o.CustomerId = c.CustomerId
    LEFT JOIN OrderItems oi ON o.OrderId = oi.OrderId
    LEFT JOIN Shipments s ON o.OrderId = s.OrderId
    WHERE o.OrderDate >= @Since
        AND (@Status IS NULL OR o.Status = @Status)
    GROUP BY o.OrderId, o.OrderDate, c.CustomerName,
             o.TotalAmount, s.ShipmentStatus, s.TrackingNumber
    ORDER BY o.OrderDate DESC;
END

Parameter Sniffing

One gotcha with stored procedures is parameter sniffing. SQL Server compiles the execution plan based on the parameter values from the first call. If those values are atypical (e.g., a status that matches 90% of rows), the cached plan might be terrible for subsequent calls with more selective parameters.

Solutions:

  • Use OPTION (RECOMPILE) for queries where parameter values vary widely
  • Use OPTIMIZE FOR UNKNOWN to get a plan based on average statistics
  • Use local variables to decouple the parameter from the plan

-- Option 1: Recompile every time (slight CPU cost, but always optimal plan)
SELECT * FROM Orders WHERE Status = @Status
OPTION (RECOMPILE);

-- Option 2: Build the plan from average statistics, not the sniffed value
SELECT * FROM Orders WHERE Status = @Status
OPTION (OPTIMIZE FOR UNKNOWN);

-- Option 3: Use a local variable (similar effect to OPTIMIZE FOR UNKNOWN)
DECLARE @LocalStatus NVARCHAR(20) = @Status;
SELECT * FROM Orders WHERE Status = @LocalStatus;

Common Anti-Patterns

N+1 Queries

This is the most common performance problem in applications using an ORM. You load a list of orders, then for each order, you load the customer — resulting in 1 query for orders + N queries for customers.

// Bad: N+1 queries
var orders = await _context.Orders.ToListAsync();
foreach (var order in orders)
{
    var customer = await _context.Customers.FindAsync(order.CustomerId);
    // ...
}

// Good: eager loading
var orders = await _context.Orders
    .Include(o => o.Customer)
    .ToListAsync();
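When the loaded entities are only displayed and never modified, AsNoTracking (a standard EF Core API) skips change tracking on top of the eager load:

```csharp
// Read-only list: eager-load the customer, but skip change tracking
var orders = await _context.Orders
    .AsNoTracking()
    .Include(o => o.Customer)
    .ToListAsync();
```

This narrows the performance gap with Dapper for read paths without leaving EF Core.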

Functions in WHERE Clauses

Wrapping a column in a function prevents index usage:

-- Bad: full table scan, index on OrderDate is ignored
SELECT * FROM Orders WHERE YEAR(OrderDate) = 2024;

-- Good: sargable, uses index
SELECT * FROM Orders
WHERE OrderDate >= '2024-01-01' AND OrderDate < '2025-01-01';

Implicit Conversions

If your column is VARCHAR but your parameter is NVARCHAR (which is what .NET sends by default), SQL Server has to convert every row to compare — effectively a full scan:

-- If OrderCode is VARCHAR(50), this forces implicit conversion:
-- WHERE OrderCode = N'ABC123'  -- NVARCHAR parameter

-- Fix: specify the type explicitly in Dapper
var result = await connection.QueryAsync<Order>(
    "SELECT * FROM Orders WHERE OrderCode = @Code",
    new { Code = new DbString { Value = "ABC123", IsAnsi = true, Length = 50 } });
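If the same query goes through EF Core instead, the equivalent fix is to map the property as non-Unicode so parameters are sent as varchar (a sketch using the OrderCode column from the example above):

```csharp
// EF Core model configuration: send OrderCode parameters as varchar(50)
modelBuilder.Entity<Order>()
    .Property(o => o.OrderCode)
    .IsUnicode(false)
    .HasMaxLength(50);
```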

Quick Wins

  • Avoid SELECT * — only fetch the columns you need. Fewer columns means fewer pages read and less memory used.
  • Use pagination — OFFSET/FETCH instead of loading entire result sets. Users don't need 50,000 rows on a single page.
  • Parameterize everything — prevents SQL injection and allows plan reuse across different parameter values.
  • Watch for implicit conversions — mismatched types in WHERE clauses can kill index usage.
  • Use SET NOCOUNT ON in stored procedures — prevents SQL Server from sending row count messages, reducing network traffic.
  • Batch your updates — instead of updating one row at a time in a loop, use a single UPDATE with a JOIN or a table-valued parameter.
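Two of these wins as quick sketches, using the Orders table from the earlier examples (StagedOrderUpdates is a hypothetical staging table):

```sql
-- Pagination: page 3 with 50 rows per page (ORDER BY is required for OFFSET/FETCH)
SELECT OrderId, OrderDate, TotalAmount
FROM Orders
ORDER BY OrderDate DESC
OFFSET 100 ROWS FETCH NEXT 50 ROWS ONLY;

-- Batched update: one set-based statement instead of a row-at-a-time loop
UPDATE o
SET o.Status = u.NewStatus
FROM Orders o
INNER JOIN StagedOrderUpdates u ON o.OrderId = u.OrderId;
```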

Monitoring

Set up regular monitoring with Query Store or Extended Events. The queries that are slow today might not be the same ones that are slow next month as data grows.

Query Store (available in SQL Server 2016+) is particularly useful because it tracks query performance over time:

-- Find the top 10 queries by total duration
SELECT TOP 10
    qt.query_sql_text,
    rs.avg_duration / 1000 AS avg_duration_ms,
    rs.count_executions,
    rs.avg_logical_io_reads
FROM sys.query_store_query_text qt
JOIN sys.query_store_query q ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan p ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats rs ON p.plan_id = rs.plan_id
ORDER BY rs.avg_duration * rs.count_executions DESC;

This gives you a clear picture of where your database is spending its time — and where optimization will have the biggest impact.

Conclusion

SQL Server performance tuning is an ongoing process, not a one-time task. Start with execution plans, add indexes strategically, choose the right data access tool for each query, and monitor continuously. The combination of EF Core for convenience and Dapper for performance, backed by well-designed stored procedures and proper indexing, handles the vast majority of enterprise workloads efficiently.
