Lesson 6
Record Locking

Objective
Use of optimistic and pessimistic locking.
Record Locking in SQL Server
Why locking exists
SQL Server is built for concurrency: many sessions can read and change data at the same time. Locking is the core mechanism that prevents
inconsistent results (lost updates, dirty reads, double-processing work items, and other concurrency anomalies).
You typically do not “turn locking on.” SQL Server acquires and releases locks automatically based on the statement, the isolation level, and the
engine’s internal rules (including lock escalation). Your job as a developer or DBA is to choose the right concurrency model for the workload:
pessimistic (block to prevent conflicts) or optimistic (allow concurrency and detect conflicts).
Lock modes you will see in real systems
SQL Server uses many internal lock types, but the following categories are the most important to recognize:
- Shared (S): used for reads (for example, SELECT) under locking-based read behavior.
- Update (U): a transitional lock used to reduce deadlocks in “read-then-update” patterns.
- Exclusive (X): used for writes (INSERT, UPDATE, DELETE).
- Intent (IS/IU/IX): supports the locking hierarchy (row/key → page → table), allowing the engine to coordinate multi-granularity locks.
- Schema (Sch-S/Sch-M): protects metadata during queries and schema changes.
- Bulk Update (BU): used for specific bulk operations and load patterns.
Locking can occur at different granularities (key/row, page, table). SQL Server may also escalate locks (for example, many row locks → a table lock)
when it determines escalation is more efficient than tracking thousands of fine-grained locks.
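To see these modes and granularities on a live system, you can query the sys.dm_tran_locks DMV. A minimal sketch (the filter limits output to the current database):

```sql
-- Inspect locks currently granted or requested in the current database
SELECT
    request_session_id,     -- session holding or waiting for the lock
    resource_type,          -- KEY, PAGE, OBJECT, DATABASE, ...
    request_mode,           -- S, U, X, IS, IX, Sch-S, ...
    request_status          -- GRANT, WAIT, CONVERT
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID()
ORDER BY request_session_id;
```

Running this while a workload executes shows the intent locks (IS/IX) at table and page level paired with the row/key locks beneath them.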
Pessimistic locking
Pessimistic locking is “block first, then proceed.” The engine protects data by holding locks long enough to prevent other sessions
from making conflicting changes.
This approach is usually the right choice when:
- Collisions are common (many sessions update the same rows).
- The business operation must be serialized (for example, “only one session can claim the next work item”).
- You must guarantee that a row you read cannot change before you finish the unit of work.
Typical pessimistic patterns
- Use the appropriate isolation level for the business requirement. Higher isolation can reduce anomalies, but it increases blocking risk and can decrease throughput.
- Use targeted lock hints only when you have a specific, measured reason. For example, a classic queue/claim pattern may intentionally take update locks while reading:
BEGIN TRAN;
-- Example: claim a single available work item without racing other workers
SELECT TOP (1) *
FROM dbo.WorkQueue WITH (UPDLOCK, READPAST, ROWLOCK)
WHERE Status = 'READY'
ORDER BY CreatedAt;
-- Application then updates the claimed row(s) inside the same transaction
-- UPDATE dbo.WorkQueue SET Status='IN_PROGRESS', ...
COMMIT TRAN;
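The read-then-update claim above can often be collapsed into a single atomic statement using a CTE with an OUTPUT clause, which keeps lock duration to a minimum. A sketch, assuming the same dbo.WorkQueue table and columns as above:

```sql
-- Claim the oldest READY item and return it to the caller in one statement
WITH NextItem AS (
    SELECT TOP (1) *
    FROM dbo.WorkQueue WITH (UPDLOCK, READPAST, ROWLOCK)
    WHERE Status = 'READY'
    ORDER BY CreatedAt
)
UPDATE NextItem
SET Status = 'IN_PROGRESS'
OUTPUT inserted.*;   -- the claimed row, as modified, flows back to the caller
```

Because the claim and the status change happen in one statement, there is no window in which another worker can see the row as READY after this session has read it.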
- Use application locks when the resource is not a row (or when you need a logical lock). This is a standard pattern for “one-at-a-time” business operations.
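A sketch of the application-lock pattern using sp_getapplock; the resource name 'NightlyInvoiceRun' and the error number are illustrative:

```sql
BEGIN TRAN;

DECLARE @result int;

-- Take a logical, named exclusive lock scoped to this transaction
EXEC @result = sp_getapplock
    @Resource    = 'NightlyInvoiceRun',
    @LockMode    = 'Exclusive',
    @LockOwner   = 'Transaction',
    @LockTimeout = 5000;            -- wait up to 5 seconds

IF @result < 0                      -- negative return codes indicate failure
BEGIN
    ROLLBACK TRAN;
    THROW 50002, 'Could not acquire application lock.', 1;
END

-- ... perform the one-at-a-time business operation here ...

COMMIT TRAN;  -- a transaction-owned application lock is released with the commit
```

Because the lock is identified by name rather than by a row, any number of unrelated sessions can serialize on the same logical operation without contending on table data.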
Optimistic locking
Optimistic locking is “allow concurrency, then detect conflicts.” This model assumes most transactions will not collide.
It generally improves concurrency by reducing blocking, but it requires you to handle conflicts correctly.
Two common optimistic approaches in SQL Server
- Row versioning for reads (RCSI / SNAPSHOT). Instead of readers taking locks that block writers, readers access versioned rows. This is a major concurrency lever in modern SQL Server deployments, especially for read-heavy systems.
-- Database-level (execute once, typically during a maintenance window)
ALTER DATABASE YourDatabase SET READ_COMMITTED_SNAPSHOT ON;
ALTER DATABASE YourDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;
-- Session-level (when you need transaction-level versioning)
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;
-- consistent reads as-of the start of the transaction
COMMIT TRAN;
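Under SNAPSHOT isolation, a write that collides with another session’s committed change fails with error 3960 (“Snapshot isolation transaction aborted due to update conflict”), so callers should catch it and retry. A sketch, assuming the dbo.Customer table from the next example and an application-side retry policy:

```sql
BEGIN TRY
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
    BEGIN TRAN;

    UPDATE dbo.Customer
    SET Email = @NewEmail
    WHERE CustomerId = @CustomerId;

    COMMIT TRAN;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRAN;

    IF ERROR_NUMBER() = 3960
        PRINT 'Update conflict under SNAPSHOT isolation: retry the transaction.';
    ELSE
        THROW;   -- re-raise anything that is not an update conflict
END CATCH;
```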
- Update conflict detection with a rowversion column.
Add a rowversion column and require the caller to present the original value during the update. If someone else changed the row,
your update affects 0 rows, and you detect a concurrency conflict.
-- Table design (one time)
ALTER TABLE dbo.Customer ADD RowVer rowversion;
-- Update with optimistic check (pattern)
UPDATE dbo.Customer
SET Email = @NewEmail
WHERE CustomerId = @CustomerId
AND RowVer = @OriginalRowVer;
IF @@ROWCOUNT = 0
BEGIN
-- Concurrency conflict: someone else changed the row
THROW 50001, 'Concurrency conflict: row changed by another transaction.', 1;
END
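The caller obtains @OriginalRowVer during the initial read and carries only that token (not the rest of the row) through to the update. A sketch of the read phase:

```sql
-- Read phase: fetch the data plus its current version token
DECLARE @OriginalRowVer binary(8);   -- rowversion values are 8 bytes

SELECT @OriginalRowVer = RowVer
FROM dbo.Customer
WHERE CustomerId = @CustomerId;

-- ... the user edits the record, possibly for minutes, with no locks held ...
-- The UPDATE shown above then presents @OriginalRowVer as the optimistic check.
```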
When optimistic locking is a strong default
- Read-heavy workloads that suffer from blocking under traditional locking.
- Workloads where conflicts are rare, but latency matters.
- Systems where the application can safely retry or surface a “record changed” message.
Choosing between optimistic and pessimistic
Use this decision frame:
- Prefer optimistic when you want high concurrency and can handle conflicts (retry, merge, or user prompt).
- Prefer pessimistic when the business rule requires serialization or conflicts are frequent and expensive.
In practice, most real systems are hybrid:
you use row versioning to reduce reader/writer blocking, and you use targeted pessimistic locking for short, critical “claim/transfer” operations.
Modern note for SQL Server 2025
SQL Server continues to invest in reducing lock overhead and improving concurrency. In SQL Server 2025, features such as
optimized locking and best-practice guidance around Read Committed Snapshot (RCSI) reinforce a modern default:
reduce blocking for readers while keeping updates correct and observable.
The key takeaway is practical: start with clean transaction design (short transactions, proper indexes, and clear access patterns), then
choose pessimistic vs. optimistic controls where the workload proves it needs them.
Operational checklist
When locking becomes a production issue, focus on observation before “fixes”:
- Identify the blockers and lock types (DMVs such as sys.dm_tran_locks).
- Measure wait types (blocking, deadlocks, latch waits) and transaction duration.
- Reduce transaction scope: keep user interaction outside transactions; commit quickly.
- Verify indexing so statements touch fewer rows (fewer locks, fewer escalations).
- Use lock hints sparingly and only when you can justify them with a reproducible concurrency problem.
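A starting point for the first two checklist items is to join current requests with their blockers. A sketch; adjust the filters for your environment:

```sql
-- Who is blocked, by whom, and what are they waiting on?
SELECT
    r.session_id,
    r.blocking_session_id,     -- 0 means the request is not blocked
    r.wait_type,
    r.wait_time,               -- milliseconds
    t.text AS current_statement
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;
```

Capturing this output over time (rather than once) distinguishes a transient queue from a persistent blocking chain.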
The next lesson introduces deadlocks, a common problem that arises from locking.
