Lesson 12
SQL Server Transactions and Locks: Module Summary
This module established a practical baseline for building reliable business rules in SQL Server. Transactions protect consistency by grouping changes into an all-or-nothing unit of work, but transactions also introduce concurrency side effects—most notably locks, blocking, and deadlocks.
By the end of the module, you should be able to:
- Identify and define transactions and when explicit control is required
- Understand transaction modes and how SQL Server starts and ends units of work
- Select isolation levels and predict read/write behavior under concurrency
- Work safely with nested transaction patterns (and avoid false assumptions)
- Explain pessimistic vs. optimistic concurrency and the tradeoffs of row versioning
- Recognize, troubleshoot, and reduce deadlocks and transaction blocking
- Create local and distributed transactions and understand the operational implications
- Handle errors deterministically so failures roll back cleanly and release locks
Transactions: Definition and Purpose
A transaction is a unit of work that must either fully succeed or fully fail. In SQL Server terms, that typically means the statements between BEGIN TRAN and COMMIT are treated as one logical operation. When the transaction rolls back, SQL Server undoes data modifications and frees resources held by the transaction.
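As a minimal sketch of this idea (the Accounts table and its columns are illustrative, not part of this module), a funds transfer shows why two related updates belong in one transaction:

```sql
-- Both updates commit together or not at all; a failure between them
-- followed by ROLLBACK undoes the first update and releases its locks.
BEGIN TRAN;

UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1;
UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountId = 2;

COMMIT;  -- both changes become permanent as one unit
```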
Transaction Modes: How Work Becomes “Transactional”
SQL Server can execute work under different patterns:
- Autocommit behavior: many individual statements commit automatically when they succeed (common in ad hoc sessions).
- Explicit transactions: you control scope using BEGIN TRAN, COMMIT, and ROLLBACK. This is the typical choice for multi-statement business rules.
- Nested “layers”: additional BEGIN TRAN calls increment @@TRANCOUNT, but this does not behave like independently commit/rollback-able nested transactions. (This is addressed again below.)
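The counting behavior can be observed directly with @@TRANCOUNT; this is a sketch you can run in any test database:

```sql
BEGIN TRAN;                           -- @@TRANCOUNT = 1
    BEGIN TRAN;                       -- @@TRANCOUNT = 2, but not an independent transaction
    SELECT @@TRANCOUNT AS TranCount;  -- returns 2
    COMMIT;                           -- only decrements the count; nothing is durable yet
COMMIT;                               -- the outermost COMMIT actually commits the work
```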
Isolation Levels: Concurrency Rules for Reads and Writes
Isolation levels exist to control how concurrent transactions interact—specifically which anomalies you permit (dirty reads, non-repeatable reads, phantom
rows) and what tradeoffs you accept (blocking vs. tempdb versioning overhead).
SQL Server supports the classic isolation levels plus snapshot-based approaches:
- READ UNCOMMITTED: may read uncommitted changes (dirty reads). Useful mainly for troubleshooting scenarios, not for business rules.
- READ COMMITTED: the default isolation level in SQL Server. Readers avoid dirty reads, but blocking can occur when writers hold locks.
- REPEATABLE READ: prevents changes to rows the transaction has already read, but does not prevent phantoms.
- SERIALIZABLE: strongest locking-based isolation. Prevents phantoms by taking additional key-range locks; increases blocking risk.
- SNAPSHOT (row versioning): provides a consistent view of data as of the transaction start, typically reducing reader/writer blocking at the cost of storing row versions in tempdb.
A major modern design choice is whether to keep a locking-based read model (pessimistic reads) or to enable row versioning (optimistic reads) using
snapshot-based options, depending on workload and latency expectations.
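Isolation is chosen per session with SET TRANSACTION ISOLATION LEVEL. A sketch (the Orders table is illustrative):

```sql
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

BEGIN TRAN;
-- The key-range locks taken by this read prevent phantom rows from
-- appearing in the qualifying range until the transaction ends.
SELECT COUNT(*) FROM dbo.Orders WHERE CustomerId = 42;
COMMIT;
```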
Nested Transactions: What SQL Server Really Supports
SQL Server supports nested transaction counting via @@TRANCOUNT, but not independently manageable nested transactions.
A COMMIT of an “inner” transaction merely decrements @@TRANCOUNT; only the outermost COMMIT makes the work durable, and a
ROLLBACK without a savepoint rolls back the entire unit of work.
When you need partial rollback inside a larger operation, use a savepoint:
BEGIN TRAN;
-- Step 1 work here...
SAVE TRANSACTION BeforeStep2;
-- Step 2 work here...
-- Roll back only to the savepoint (not the beginning)
ROLLBACK TRANSACTION BeforeStep2;
COMMIT;  -- Step 1 still commits
Locking Models: Pessimistic vs Optimistic Concurrency
Locking is the mechanism that protects consistency when multiple sessions access the same data. Conceptually, this module contrasted:
- Pessimistic locking: assume conflicts are likely, so readers and writers take locks to prevent interference. This can create blocking under load.
- Optimistic locking (row versioning): assume conflicts are less common, so readers typically avoid blocking and instead read older committed versions when writers are active. This reduces blocking but shifts cost into tempdb and version store management.
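Row versioning is enabled at the database level. A sketch of the two standard options, to be run against a test database:

```sql
-- Let sessions opt in with SET TRANSACTION ISOLATION LEVEL SNAPSHOT:
ALTER DATABASE CURRENT SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Or make the default READ COMMITTED level use row versioning for reads:
ALTER DATABASE CURRENT SET READ_COMMITTED_SNAPSHOT ON;
```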
Understanding Exclusive Locks
Exclusive locks protect resources being changed. While an exclusive lock is held, other sessions cannot take conflicting locks on the same resource.
This prevents concurrent updates/deletes on the same row/page/key from producing inconsistent results.
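Held exclusive locks can be observed with the sys.dm_tran_locks DMV. A sketch (the table name is illustrative):

```sql
-- Session 1: leave a transaction open so the exclusive (X) lock stays held.
BEGIN TRAN;
UPDATE dbo.Accounts SET Balance = Balance + 0 WHERE AccountId = 1;

-- Session 2: inspect locks currently requested or granted.
SELECT request_session_id, resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_mode = 'X';
```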
Understanding Update Locks
Update locks are designed to reduce a common deadlock pattern during “read-then-update” operations. An UPDATE often has two logical phases:
- Search phase: SQL Server locates qualifying rows (often requiring a lock compatible with readers).
- Modification phase: SQL Server converts to an exclusive lock to apply the physical update.
Without update locks, two sessions can both read in a compatible mode and then both attempt to convert to exclusive locks, producing a conversion deadlock.
Update locks act as a “reservation” to make that conversion pattern safer by reducing competing conversions on the same resource.
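The same reservation can be requested explicitly in a read-then-update pattern with the UPDLOCK table hint. A sketch (table and column names are illustrative):

```sql
DECLARE @CurrentBalance money;

BEGIN TRAN;

-- The update lock blocks other UPDLOCK/X requests on this row (but not
-- plain readers), so two sessions cannot both "reserve" the row and then
-- deadlock while converting to exclusive locks.
SELECT @CurrentBalance = Balance
FROM dbo.Accounts WITH (UPDLOCK)
WHERE AccountId = 1;

-- ... business-rule validation here ...

UPDATE dbo.Accounts SET Balance = @CurrentBalance - 100 WHERE AccountId = 1;

COMMIT;
```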
Blocking and Deadlocks: Detection and Prevention
Blocking occurs when one session holds locks that another session needs. Deadlocks are a special case where two (or more) sessions form a cycle of
dependencies, and SQL Server selects a victim to break the cycle.
In modern environments, a practical approach is:
- Keep transactions short: do less work inside the transaction, and avoid user interaction in the middle of transactional logic.
- Access objects in a consistent order: if two procedures update tables A then B, both should follow the same order.
- Index for the access path: reduce lock footprint by avoiding large scans when a selective seek is possible.
- Choose isolation intentionally: consider snapshot-based strategies for read-heavy workloads that suffer from reader/writer blocking.
- Implement retry for deadlock victims: deadlocks can occur even in well-designed systems; retry logic is often a normal requirement.
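The retry recommendation above can be sketched as a loop that re-runs the transaction when the session is chosen as a deadlock victim (error 1205); table names are illustrative:

```sql
DECLARE @Retry int = 3;

WHILE @Retry > 0
BEGIN
    BEGIN TRY
        BEGIN TRAN;
        UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1;
        UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountId = 2;
        COMMIT;
        BREAK;  -- success: leave the loop
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK;
        IF ERROR_NUMBER() = 1205 AND @Retry > 1
            SET @Retry -= 1;  -- deadlock victim: try the transaction again
        ELSE
            THROW;            -- any other error (or retries exhausted) propagates
    END CATCH;
END;
```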
BEGIN, COMMIT, ROLLBACK: Operational Discipline
Your transactional code must always end in a known state:
- If work succeeds, commit once at the end.
- If any step fails, roll back and exit without leaving an open transaction.
- When using savepoints, roll back to the savepoint only when partial rollback is intended and safe.
Distributed Transactions: When “One Database” Is Not Enough
Distributed transactions coordinate a single logical unit of work across multiple resource managers (for example, multiple SQL Server instances or databases).
This often relies on a two-phase commit model and adds operational complexity: additional logging, network dependencies, and recovery coordination.
In practice, distributed transactions are typically reserved for cases where true atomicity across systems is required. In many architectures, teams instead
prefer designs that reduce cross-system atomic requirements (for example, idempotent operations, compensating actions, or transactional outbox patterns),
especially in cloud-native systems.
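When cross-instance atomicity truly is required, T-SQL can enlist a linked server in one unit of work. A sketch, assuming MSDTC is running and RemoteServer is a hypothetical configured linked server:

```sql
BEGIN DISTRIBUTED TRANSACTION;

-- A local change...
UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1;

-- ...and a change on the linked server, committed together via two-phase commit.
UPDATE RemoteServer.Finance.dbo.Accounts SET Balance = Balance + 100 WHERE AccountId = 2;

COMMIT;
```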
Error Handling and Testing: Make Failure Predictable
Robust transaction design is incomplete without deterministic error handling. Modern T-SQL implementations typically use:
TRY...CATCH, XACT_STATE(), and THROW. Many teams also enable SET XACT_ABORT ON inside stored procedures
to make many runtime errors automatically abort and roll back the transaction.
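A common stored-procedure skeleton combining these pieces might look like the following sketch (the procedure and table names are illustrative):

```sql
CREATE OR ALTER PROCEDURE dbo.TransferFunds  -- hypothetical procedure
    @FromId int, @ToId int, @Amount money
AS
BEGIN
    SET NOCOUNT, XACT_ABORT ON;  -- most runtime errors now abort and roll back

    BEGIN TRY
        BEGIN TRAN;
        UPDATE dbo.Accounts SET Balance = Balance - @Amount WHERE AccountId = @FromId;
        UPDATE dbo.Accounts SET Balance = Balance + @Amount WHERE AccountId = @ToId;
        COMMIT;
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK;  -- leave no open or doomed transaction behind
        THROW;                          -- re-raise with the original error details
    END CATCH;
END;
```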
The exercise you completed in this module reinforces an important habit: always validate that a failed transaction rolls back and releases locks,
and that error details returned to the caller are actionable.
Glossary Terms
This module introduced the following terms:
- ACID: Atomicity, Consistency, Isolation, Durability—properties used to describe correctness expectations for transactional systems.
- Deadlock: A circular dependency where two or more sessions each wait on resources held by the others, preventing progress until SQL Server selects a victim.
- Distributed transaction: A transaction coordinated across multiple resource managers (for example, multiple SQL Server instances/databases).
- Isolation level: A concurrency contract that controls what a transaction can read and what other transactions can change while it runs (tradeoff: consistency vs blocking/versioning cost).
- Nested transaction: A pattern where @@TRANCOUNT increases inside an existing transaction. In SQL Server, “inner commits” do not independently finalize work; savepoints are used for partial rollback.
- Optimistic locking: A concurrency approach that assumes conflicts are less common and uses row versioning (or conflict detection) to reduce blocking for readers.
- Pessimistic locking: A concurrency approach that uses locks to prevent conflicts up front, which can increase blocking under contention.
- Timeout: A limit on how long a session will wait for a lock before returning an error (often configured at the client, session, or statement level).
- Transaction: A set of SQL operations treated as one logical unit of work that either commits or rolls back as a whole.
- Two-phase commit: A distributed commit protocol that coordinates “prepare” then “commit/abort” across participants to preserve atomicity.
The next module discusses SQL Server events.
Transaction Locks Quiz
Before moving on, click the Quiz link below to check your knowledge of the material covered in this module with a short, multiple-choice quiz.
Transaction Locks Quiz
