Transactions can be isolated from each other to different degrees. Repeatable reads provide the strongest isolation: for the life of the transaction, every time a thread of control reads a data item, it will be unchanged from its previous value (assuming, of course, that the thread of control does not itself modify the item). Berkeley DB enforces repeatable reads whenever database reads are wrapped in transactions.
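As a rough illustration, the following sketch wraps two reads of the same item in a single transaction, so the second DB->get is guaranteed to return the same value as the first. The dbenv and dbp handles, the key value, and the minimal error handling are assumptions made for the example rather than part of any particular application.

#include <string.h>
#include <db.h>

/*
 * A minimal sketch of a transactionally protected (repeatable) read.
 * Assumes dbenv and dbp are an already-opened transactional DB_ENV
 * and DB handle; the key value is invented for the example.
 */
int
read_item_twice(DB_ENV *dbenv, DB *dbp)
{
	DB_TXN *txn;
	DBT key, data;
	int ret;

	memset(&key, 0, sizeof(key));
	memset(&data, 0, sizeof(data));
	key.data = "fruit";
	key.size = sizeof("fruit");

	if ((ret = dbenv->txn_begin(dbenv, NULL, &txn, 0)) != 0)
		return (ret);

	/* First read of the item. */
	if ((ret = dbp->get(dbp, txn, &key, &data, 0)) != 0)
		goto err;

	/* ... other application work ... */

	/* Second read: within this transaction the item cannot have changed. */
	if ((ret = dbp->get(dbp, txn, &key, &data, 0)) != 0)
		goto err;

	return (txn->commit(txn, 0));

err:	(void)txn->abort(txn);
	return (ret);
}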
Most applications do not need to enclose reads in transactions, and when possible, transactionally protected reads should be avoided because they can cause performance problems. For example, a transactionally protected cursor sequentially reading each key/data pair in a database will acquire a read lock on most of the pages in the database, and so will gradually block all write operations on the database until the transaction commits or aborts. Note, however, that if update transactions are present in the application, read operations must still use locking, and must be prepared to repeat any operation (possibly closing and reopening a cursor) that fails with a return value of DB_LOCK_DEADLOCK. Applications that need repeatable reads are ones that require the ability to repeatedly access a data item knowing that it will not have changed (for example, an operation that modifies a data item based on its existing value).
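When reads must be transactionally protected anyway, a common pattern is to abort and retry the entire operation whenever DB_LOCK_DEADLOCK is returned. The following sketch walks a database with a transactionally protected cursor and retries from the top on deadlock; the dbenv and dbp handles are assumed to be already open, and the per-pair processing is omitted for brevity.

#include <string.h>
#include <db.h>

/*
 * A minimal sketch of a reader that must coexist with update
 * transactions.  If any operation returns DB_LOCK_DEADLOCK, the
 * cursor is closed, the transaction is aborted, and the walk is
 * repeated from the beginning.
 */
int
walk_database(DB_ENV *dbenv, DB *dbp)
{
	DBC *dbc;
	DB_TXN *txn;
	DBT key, data;
	int ret;

retry:	dbc = NULL;
	if ((ret = dbenv->txn_begin(dbenv, NULL, &txn, 0)) != 0)
		return (ret);
	if ((ret = dbp->cursor(dbp, txn, &dbc, 0)) != 0)
		goto err;

	memset(&key, 0, sizeof(key));
	memset(&data, 0, sizeof(data));
	while ((ret = dbc->c_get(dbc, &key, &data, DB_NEXT)) == 0) {
		/* Process the key/data pair here. */
	}

	if (ret == DB_NOTFOUND) {	/* Reached the end of the database. */
		ret = dbc->c_close(dbc);
		dbc = NULL;
		if (ret != 0)
			goto err;
		return (txn->commit(txn, 0));
	}

err:	if (dbc != NULL)
		(void)dbc->c_close(dbc);
	(void)txn->abort(txn);
	if (ret == DB_LOCK_DEADLOCK)
		goto retry;		/* Close, reopen, and repeat. */
	return (ret);
}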
Berkeley DB optionally supports reading uncommitted data; that is, read operations may request data which has been modified but not yet committed by another transaction. This is done by first specifying the DB_DIRTY_READ flag when opening the underlying database, and then specifying the DB_DIRTY_READ flag when beginning a transaction, opening a cursor, or performing a read operation. The advantage of using DB_DIRTY_READ is that read operations will not block when another transaction holds a write lock on the requested data; the disadvantage is that read operations may return data that will disappear should the transaction holding the write lock abort.
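A rough sketch of the calling sequence follows, assuming dbenv is an already-opened transactional environment; the file name inventory.db and the key value are invented for the example. The database is opened with DB_DIRTY_READ, and the read operation passes DB_DIRTY_READ as well, so it will not block on another transaction's write lock.

#include <string.h>
#include <db.h>

/*
 * A minimal sketch of reading uncommitted data.  Assumes dbenv is an
 * already-opened transactional DB_ENV; the database name and key are
 * invented for the example.
 */
int
dirty_read_example(DB_ENV *dbenv)
{
	DB *dbp;
	DBT key, data;
	int ret, t_ret;

	if ((ret = db_create(&dbp, dbenv, 0)) != 0)
		return (ret);

	/* The underlying database must itself be opened with DB_DIRTY_READ. */
	if ((ret = dbp->open(dbp, NULL, "inventory.db", NULL,
	    DB_BTREE, DB_DIRTY_READ | DB_AUTO_COMMIT, 0644)) != 0)
		goto err;

	memset(&key, 0, sizeof(key));
	memset(&data, 0, sizeof(data));
	key.data = "sku-100";
	key.size = sizeof("sku-100");

	/*
	 * This read does not block on other transactions' write locks,
	 * but may return data that disappears if the writer aborts.
	 */
	ret = dbp->get(dbp, NULL, &key, &data, DB_DIRTY_READ);
	if (ret == DB_NOTFOUND)
		ret = 0;

err:	if ((t_ret = dbp->close(dbp, 0)) != 0 && ret == 0)
		ret = t_ret;
	return (ret);
}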