Chapter 7: Transactions

Many things can go wrong with systems, such as:

  • The software or hardware might fail in the middle of an operation
  • The network might fail, unexpectedly cutting the application off from the database
  • Multiple clients might write at the same time, overwriting each other’s changes
  • Reads might return data that reflects only part of a write

For decades, transactions have been the mechanism databases provide to protect applications from these faults and many more.

One way to describe transaction guarantees is using the ACID model. The acronym stands for Atomicity, Consistency, Isolation, and Durability. Let’s examine what they mean:

  • Atomicity means that either all of a transaction’s operations are applied or none of them are; no partial writes should ever be visible.
  • Consistency means that the invariants of the database must hold at all times. Foreign key constraints are one example of such an invariant.
  • Isolation means that other transactions never see modifications made by transactions that have not yet committed.
  • Durability means that once data is committed, it is guaranteed to be stored on non-volatile storage (disk or an equivalent).

Also, operations may touch a single object or many objects, which means transactions must also support operations that modify several objects at once.
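As a tiny illustration of multi-object atomicity, here is a transfer written against Python’s built-in sqlite3 module. The accounts table and the amounts are made up for the sketch; the point is that both updates commit together or not at all.

    import sqlite3

    # Minimal sketch: an in-memory database with an invented "accounts" table.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
    conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 0)])
    conn.commit()

    try:
        # Both updates belong to one transaction: either both are applied
        # or neither is, so no partial transfer is ever visible.
        conn.execute("UPDATE accounts SET balance = balance - 50 WHERE id = 1")
        conn.execute("UPDATE accounts SET balance = balance + 50 WHERE id = 2")
        conn.commit()
    except sqlite3.Error:
        conn.rollback()  # undo everything done since the last commit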

Isolation Levels

Isolation levels exist to deal with concurrent transactions reading and writing the same data. The strongest is the serializable isolation level, which executes transactions as if they ran one at a time. This level usually pays a significant price in performance, so weaker levels are used in practice.

The read committed level makes two guarantees: when reading, you only see values that have already been committed (no dirty reads), and when writing, you only overwrite data that has already been committed (no dirty writes).

Snapshot isolation, on the other hand, generalizes this idea a bit: a transaction only reads data that was committed before the transaction started.
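For instance, with PostgreSQL and the psycopg2 driver you can request an isolation level per transaction. The connection string and the accounts table below are placeholders, so read this as a sketch rather than a recipe.

    import psycopg2  # assumes a reachable PostgreSQL server

    conn = psycopg2.connect("dbname=test user=postgres")  # placeholder DSN

    with conn:
        with conn.cursor() as cur:
            # Ask for snapshot-style isolation (PostgreSQL's REPEATABLE READ):
            # this transaction only sees data committed before it started.
            cur.execute("SET TRANSACTION ISOLATION LEVEL REPEATABLE READ")
            cur.execute("SELECT balance FROM accounts WHERE id = 1")
            row = cur.fetchone()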

Preventing Lost Updates

There are several ways of preventing lost updates (the first, second, and fourth are sketched in code after this list):

  • Atomic write operations: the database provides read-modify-write operations that execute atomically
  • Explicit locking: the application locks the rows it is about to update, typically with a clause such as SELECT ... FOR UPDATE
  • Automatic lost-update detection by the database engine, which aborts the offending transaction
  • Compare-and-set operations, which only apply the update if the value has not changed since it was read
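Here is roughly what those three options look like at the SQL level, written as psycopg2 calls; the counters table and its columns are invented for the example.

    import psycopg2  # sketch only; the DSN and the "counters" table are assumed

    conn = psycopg2.connect("dbname=test user=postgres")
    cur = conn.cursor()

    # 1. Atomic write operation: the read-modify-write runs inside the database,
    #    so two concurrent increments cannot overwrite each other.
    cur.execute("UPDATE counters SET value = value + 1 WHERE key = %s", ("foo",))

    # 2. Explicit locking: FOR UPDATE holds a row lock until commit, forcing any
    #    other transaction's read-modify-write on the same row to wait.
    cur.execute("SELECT value FROM counters WHERE key = %s FOR UPDATE", ("foo",))
    value = cur.fetchone()[0]
    cur.execute("UPDATE counters SET value = %s WHERE key = %s", (value + 1, "foo"))

    # 4. Compare-and-set: re-read the value, then only apply the update if it is
    #    still what we read; a concurrent change makes the UPDATE match zero rows.
    cur.execute("SELECT value FROM counters WHERE key = %s", ("foo",))
    seen = cur.fetchone()[0]
    cur.execute(
        "UPDATE counters SET value = %s WHERE key = %s AND value = %s",
        (seen + 1, "foo", seen),
    )
    if cur.rowcount == 0:
        conn.rollback()  # someone changed the value underneath us: retry or give up
    else:
        conn.commit()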

Write Skew and Phantoms

Another concurrency problem is called write skew, which follows the pattern: read some data, then take some action based on what was read. The problem is that if another transaction can modify that data, then by the time the decision is made the data may already be out of date.

One way of solving this issue is to explicitly lock the record or set of records the decision depends on. Another way is to materialize the conflict into a table and use the regular row-level locking mechanisms.
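A hedged sketch of the first approach, using the classic on-call example: lock the rows the decision depends on before checking the condition, so a concurrent transaction doing the same check has to wait. The doctors table, its columns, and the shift id are assumptions made for the example.

    import psycopg2  # sketch; the "doctors" table and its columns are invented

    conn = psycopg2.connect("dbname=test user=postgres")
    cur = conn.cursor()

    # FOR UPDATE locks the rows the decision is based on, not just the row we
    # plan to modify, closing the window in which the premise can go stale.
    cur.execute(
        "SELECT name FROM doctors WHERE on_call = true AND shift_id = %s FOR UPDATE",
        (1234,),
    )
    on_call = [row[0] for row in cur.fetchall()]

    if len(on_call) >= 2:
        cur.execute(
            "UPDATE doctors SET on_call = false WHERE name = %s AND shift_id = %s",
            ("alice", 1234),
        )
    conn.commit()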

Serializability

The easiest way to remove the problems we’ve seen previously is to remove concurrency altogether. This is what serial execution is about: run all transactions in order, one at a time, which makes them trivially serializable. To make this viable, every transaction must be small and fast, because one slow transaction can stall all transaction processing. The dataset should also fit into memory; otherwise, disk access might make transactions run for a long time. In addition to that, a single CPU core must be able to handle the entire transaction throughput.
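As a toy model of the idea (not a database implementation), funnel every transaction through a single worker thread so they execute strictly one after another:

    import queue
    import threading

    # In-memory "database" and a queue of transactions, both invented for the toy.
    db = {"alice": 100, "bob": 0}
    work_queue = queue.Queue()

    def transfer(amount):
        def txn():
            # Runs from start to finish with no other transaction interleaved.
            db["alice"] -= amount
            db["bob"] += amount
        return txn

    def worker():
        while True:
            txn = work_queue.get()
            if txn is None:  # shutdown signal
                break
            txn()  # one transaction at a time, in queue order

    t = threading.Thread(target=worker)
    t.start()
    work_queue.put(transfer(50))
    work_queue.put(None)
    t.join()
    print(db)  # {'alice': 50, 'bob': 50}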

Two-Phase Locking (2PL)

Two-phase locking allows multiple readers to read any given object, but writing requires an exclusive lock. So when a writer needs access to an object, it must acquire the lock in exclusive mode. MySQL (InnoDB) and PostgreSQL implement this using locks with two modes: shared and exclusive. This is different from snapshot isolation, where reads never block writes and writes never block reads.
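To make the two modes concrete, here is a toy shared/exclusive lock in Python. It illustrates the rule (any number of readers, or exactly one writer), not how InnoDB or PostgreSQL actually implement their lock managers.

    import threading

    class SharedExclusiveLock:
        """Toy lock with the two modes 2PL relies on."""

        def __init__(self):
            self._cond = threading.Condition()
            self._readers = 0
            self._writer = False

        def acquire_shared(self):
            # Many readers may hold the lock at once, but not while a writer does.
            with self._cond:
                while self._writer:
                    self._cond.wait()
                self._readers += 1

        def release_shared(self):
            with self._cond:
                self._readers -= 1
                if self._readers == 0:
                    self._cond.notify_all()

        def acquire_exclusive(self):
            # A writer must wait until no readers and no other writer hold the lock.
            with self._cond:
                while self._writer or self._readers > 0:
                    self._cond.wait()
                self._writer = True

        def release_exclusive(self):
            with self._cond:
                self._writer = False
                self._cond.notify_all()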

The downside of this approach is a significant impact on performance compared to weaker isolation levels.

Predicate locks and index-range locks are ways to lock an entire range of records. A predicate lock locks all objects that might match a given search condition; an index-range lock is an approximation of that condition attached to an index, which usually covers a larger set of objects.

Serializable Snapshot Isolation (SSI)

SSI is a fairly recent (2008) algorithm that provides full serializability with only a small performance penalty. 2PL uses what is called pessimistic locking: if a transaction wants to use a given set of objects, it does not execute its operations until it is safe to do so. An optimistic approach, by contrast, lets the transaction continue and only aborts it when the database detects that something is not right.

SSI works on top of MVCC by detecting when a transaction has acted on outdated data. Let me put it this way: if a transaction reads a value, makes some computation, and then updates the value, and in the meantime another transaction does the same thing, then at commit time the database can detect that the other transaction has changed the data we read, so it is not safe to apply our changes and the transaction is aborted.
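From the application’s point of view this shows up as transactions that abort at commit time and have to be retried. Here is a sketch against PostgreSQL, whose SERIALIZABLE level is implemented with SSI; the DSN and the counters table are assumptions.

    import psycopg2
    from psycopg2.extensions import ISOLATION_LEVEL_SERIALIZABLE, TransactionRollbackError

    conn = psycopg2.connect("dbname=test user=postgres")  # placeholder DSN
    conn.set_isolation_level(ISOLATION_LEVEL_SERIALIZABLE)

    for attempt in range(3):
        try:
            with conn.cursor() as cur:
                cur.execute("SELECT value FROM counters WHERE key = %s", ("foo",))
                value = cur.fetchone()[0]
                # ... some computation based on the value we just read ...
                cur.execute(
                    "UPDATE counters SET value = %s WHERE key = %s",
                    (value + 1, "foo"),
                )
            conn.commit()  # SSI may reject the transaction here
            break
        except TransactionRollbackError:
            conn.rollback()  # another transaction invalidated what we read: retry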
