Transactions and Locks

Lesson 8: Avoiding deadlocks
Objective: State how to avoid deadlocks in SQL Server.

Avoiding Deadlocks in SQL Server

Deadlocks in SQL Server occur when two or more sessions block each other by holding locks on resources that the others need, creating a circular wait condition that cannot resolve itself. For example, Session A might hold an exclusive lock on Table1 while waiting for a lock on Table2, while Session B holds an exclusive lock on Table2 and waits for a lock on Table1 — neither transaction can proceed. SQL Server's deadlock monitor periodically checks for these cycles and, when detected, automatically chooses one of the sessions as the "victim," rolling back its transaction and raising error 1205 to allow the other transaction(s) to continue. The most common causes include poorly ordered data access patterns, long-running transactions, and missing or inadequate indexing that forces excessive locking.
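The cycle described above can be reproduced with two connections. A minimal sketch, assuming two hypothetical tables Table1 and Table2 that each have an Id column:

```sql
-- Connection 1 (Session A)
BEGIN TRAN;
UPDATE Table1 SET Col1 = 1 WHERE Id = 1;   -- acquires an X lock on a Table1 row
WAITFOR DELAY '00:00:05';                  -- window in which Session B starts
UPDATE Table2 SET Col1 = 1 WHERE Id = 1;   -- blocked by Session B's lock
COMMIT;

-- Connection 2 (Session B), started during the delay
BEGIN TRAN;
UPDATE Table2 SET Col1 = 2 WHERE Id = 1;   -- acquires an X lock on a Table2 row
UPDATE Table1 SET Col1 = 2 WHERE Id = 1;   -- blocked by Session A: circular wait
COMMIT;
```

Within a few seconds the deadlock monitor rolls back one session with error 1205 so the other can finish. Reordering Session B to touch Table1 first removes the cycle entirely, which is exactly the "consistent order" guideline below.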

What you are trying to prevent

A deadlock is a circular wait: two (or more) sessions each hold a lock the other needs. SQL Server detects the cycle and rolls back a deadlock victim so the other session can continue. Avoiding deadlocks is mostly about reducing contention and making lock acquisition predictable; Extended Events is how you capture evidence and confirm the root cause after the fact.

High-impact guidelines that prevent most deadlocks

  1. Keep transactions short and focused. Do as much work as possible before BEGIN TRAN (validation, calculations, user input). Commit quickly. Long transactions hold locks longer and increase the overlap window where cycles form.
  2. Access objects in a consistent order. If every code path locks resources in the same sequence (tables, indexes, rows), you dramatically reduce the chance of circular waits. This includes “hidden” access paths such as triggers, cascading FKs, and indexed view maintenance.
  3. Use correct indexing so SQL Server locks fewer rows for less time. Missing/inefficient indexes can turn a narrow update into a scan, which increases the number of locks, lock duration, and the probability of escalation. In practice, deadlocks often disappear after you (a) make the predicates SARGable and (b) add/adjust a supporting index.
  4. Prefer set-based operations over cursor/row-by-row patterns. Row-by-row logic holds locks longer and repeatedly revisits the same structures. If you must loop, batch the work (for example, 500–5,000 rows per transaction) and commit between batches.
  5. Avoid “lock amplification” inside the transaction. Keep the transaction scope limited to the statements that must be atomic. Do not perform reporting queries, large reads, or non-essential lookups inside the same transaction that performs writes.
  6. Choose an isolation strategy that matches your workload. Deadlocks are commonly driven by reader/writer contention. If your workload is read-heavy, consider using row-versioning for reads (for example, READ_COMMITTED_SNAPSHOT at the database level or SNAPSHOT for specific transactions) so readers do not block writers and vice versa. (You still need good indexing and ordering—versioning reduces many deadlocks, but not all.)
  7. Use lock hints only when you can justify and test them. Hints can solve a specific problem, but they can also create new ones by forcing lock modes or ranges. If you do use them, document the rationale and keep the scope tight (example: UPDLOCK to enforce a “select-then-update” pattern).
  8. Design for safe retry in the application. Even well-designed systems can deadlock under peak concurrency. Ensure the application can retry a deadlock victim transaction (error 1205) with backoff, and ensure the operation is idempotent or safely repeatable.
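For guideline 6, row-versioned reads are enabled at the database level, or per transaction with snapshot isolation. A sketch, where SalesDB and Sales.Orders are placeholder names:

```sql
-- Readers use row versions instead of shared locks under READ COMMITTED.
-- Requires exclusive access; ROLLBACK IMMEDIATE disconnects other sessions.
ALTER DATABASE SalesDB SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

-- Alternatively, allow individual transactions to opt in to SNAPSHOT:
ALTER DATABASE SalesDB SET ALLOW_SNAPSHOT_ISOLATION ON;

SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;
SELECT COUNT(*) FROM Sales.Orders;  -- consistent point-in-time view, no blocking of writers
COMMIT;
```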
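Guideline 8 can also be implemented server-side with TRY/CATCH. A sketch of a retry loop that resubmits the transaction when it is chosen as the deadlock victim (error 1205); the atomic work itself is a placeholder:

```sql
DECLARE @retries int = 3;

WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRAN;
        -- ... the statements that must be atomic go here ...
        COMMIT;
        BREAK;  -- success: leave the retry loop
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK;

        IF ERROR_NUMBER() = 1205 AND @retries > 1
        BEGIN
            SET @retries -= 1;
            WAITFOR DELAY '00:00:01';  -- simple backoff before retrying
        END
        ELSE
            THROW;  -- not a deadlock, or retries exhausted: re-raise
    END CATCH
END;
```

Retrying only makes sense if the transaction body is idempotent or safely repeatable, as guideline 8 notes; application-side retry with exponential backoff follows the same pattern.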

Reduce pain when deadlocks still occur

  • Set “importance” with deadlock priority. You can bias which session becomes the victim so that low-priority work yields to critical work:
    SET DEADLOCK_PRIORITY LOW;  -- or NORMAL / HIGH / numeric (-10 to 10)
  • Use SET LOCK_TIMEOUT to limit blocking waits (not deadlocks). Lock timeout controls how long a statement waits on a lock before failing with error 1222; deadlocks are detected and resolved by SQL Server independently. Timeouts are a separate safety valve for long blocking chains.
    SET LOCK_TIMEOUT 5000; -- 5 seconds for the current session
    SELECT @@LOCK_TIMEOUT;

Capture and analyze deadlocks using Extended Events

SQL Server Profiler/SQL Trace is legacy for Database Engine troubleshooting; Microsoft recommends Extended Events for ongoing monitoring and diagnosis.

Option A: Use the built-in system_health session

Most installations have the system_health session running by default, and it captures deadlock information that you can query from its .xel file target:
;WITH Deadlocks AS
(
  SELECT CAST(event_data AS xml) AS event_xml
  FROM sys.fn_xe_file_target_read_file(N'system_health*.xel', NULL, NULL, NULL)
  WHERE object_name = N'xml_deadlock_report'
)
SELECT
  event_xml.value('(event/@timestamp)[1]', 'datetime2(3)') AS timestamp_utc,
  event_xml.query('(event/data/value/deadlock)[1]')       AS deadlock_graph
FROM Deadlocks
ORDER BY timestamp_utc DESC;
The returned XML is the deadlock graph. In SSMS you can click the XML to inspect it, or save it as a .xdl file and open it graphically, to see the exact statements, objects, indexes, and lock modes involved.

Option B: Create a dedicated deadlock session (recommended for targeted troubleshooting)

Use a dedicated session when you want clean evidence (separate from other health events), a longer retention window, or a specific file path:
CREATE EVENT SESSION [deadlocks_capture] ON SERVER
ADD EVENT sqlserver.xml_deadlock_report
ADD TARGET package0.event_file
(
  SET filename = N'C:\XE\deadlocks_capture.xel',
      max_file_size = 50,          -- MB
      max_rollover_files = 10
);
GO
ALTER EVENT SESSION [deadlocks_capture] ON SERVER STATE = START;
GO
Read the captured deadlocks:
;WITH X AS
(
  SELECT CAST(event_data AS xml) AS event_xml
  FROM sys.fn_xe_file_target_read_file(N'C:\XE\deadlocks_capture*.xel', NULL, NULL, NULL)
)
SELECT
  event_xml.value('(event/@timestamp)[1]', 'datetime2(3)') AS timestamp_utc,
  event_xml.query('(event/data/value/deadlock)[1]')       AS deadlock_graph
FROM X
ORDER BY timestamp_utc DESC;

Historical note: the legacy “Trace Wizard” screenshot

The image below shows the old SQL Server Profiler “Create Trace Wizard” workflow, which historically helped capture deadlock-related trace events. Profiler/SQL Trace is deprecated for Database Engine monitoring; use Extended Events (above) for modern deadlock capture and analysis.
[Image: Legacy Profiler “Create Trace Wizard” (deprecated); modern deadlock evidence is captured with Extended Events.]

Summary checklist

To avoid deadlocks in production systems:
  1. Keep transactions short; remove user interaction from transaction scope.
  2. Lock objects in a consistent order across all code paths.
  3. Fix indexing and query plans to reduce scans, lock counts, and lock duration.
  4. Batch large modifications; prefer set-based operations.
  5. Consider row-versioning for read-heavy workloads.
  6. Implement retry for deadlock victims; optionally set DEADLOCK_PRIORITY for critical workloads.
  7. Use Extended Events (system_health or a dedicated session) to capture the deadlock graph and verify the fix.
The next lesson shows you how to create transactions.
