Read Committed Transactions

Note:

This is an enterprise-only feature. Request a 30-day trial license to try it out.

READ COMMITTED is one of two transaction isolation levels supported on CockroachDB. By default, CockroachDB uses the SERIALIZABLE isolation level, which is the strongest ANSI transaction isolation level.

READ COMMITTED isolation is appropriate when maintaining high workload concurrency with minimal transaction retries is more important than guaranteed serializability.

Whereas SERIALIZABLE isolation guarantees data correctness by placing transactions into a serializable ordering, READ COMMITTED isolation permits some concurrency anomalies in exchange for minimizing transaction aborts, retries, and blocking. Compared to SERIALIZABLE transactions, READ COMMITTED transactions do not return serialization errors that require client-side handling. See READ COMMITTED transaction behavior.

If your workload is already running well under SERIALIZABLE isolation, Cockroach Labs does not recommend changing to READ COMMITTED isolation unless there is a specific need.

Note:

READ COMMITTED on CockroachDB provides stronger isolation than READ COMMITTED on PostgreSQL. On CockroachDB, READ COMMITTED prevents anomalies within single statements. For complete details on how READ COMMITTED is implemented on CockroachDB, see the Read Committed RFC.

Enable READ COMMITTED isolation

By default, the sql.txn.read_committed_isolation.enabled cluster setting is true, enabling READ COMMITTED transactions. If the cluster setting is false, READ COMMITTED transactions will run as SERIALIZABLE.
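
To check the current value of the setting, or to re-enable READ COMMITTED transactions if it has been disabled, run the following statements (a minimal sketch; modifying cluster settings requires the appropriate privileges):

SHOW CLUSTER SETTING sql.txn.read_committed_isolation.enabled;
SET CLUSTER SETTING sql.txn.read_committed_isolation.enabled = 'true';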

Tip:

To check whether any transactions are being upgraded to SERIALIZABLE, see the Upgrades of SQL Transaction Isolation Level graph in the DB Console.

Set the default isolation level to READ COMMITTED

To set all future transactions to run at READ COMMITTED isolation, use one of the following options:

  • The SET SESSION CHARACTERISTICS statement, which applies to the current session:

    SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL READ COMMITTED;
    
  • The default_transaction_isolation session variable:

    At the session level:

    SET default_transaction_isolation = 'read committed';
    

    At the database level:

    ALTER DATABASE db SET default_transaction_isolation = 'read committed';
    

    At the role level:

    ALTER ROLE foo SET default_transaction_isolation = 'read committed';
    
  • The default_transaction_isolation session variable as a connection parameter with cockroach sql:

    cockroach sql --url='postgresql://{username}@{host}:{port}/{database}?options=-c default_transaction_isolation=read%20committed'
    

To view the default isolation level of the session:

SHOW default_transaction_isolation;
    default_transaction_isolation
-----------------------------------------------------
    read committed

Set the current transaction to READ COMMITTED

To begin a transaction as a READ COMMITTED transaction, use one of the following options:

  • The BEGIN TRANSACTION ISOLATION LEVEL statement:

    BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
    
  • The SET TRANSACTION ISOLATION LEVEL statement, at the beginning of the transaction:

    BEGIN;
      SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
    
  • The transaction_isolation session variable, at the beginning of the transaction:

    BEGIN;
      SET transaction_isolation = 'read committed';
    

To view the isolation level of a transaction, run SHOW within the open transaction:

SHOW transaction_isolation;
  transaction_isolation
-------------------------
  read committed
Tip:

Starting a transaction as READ COMMITTED does not affect the default isolation level, which can be different.
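
For example, assuming the session default has not been changed from SERIALIZABLE, the two settings differ within an open READ COMMITTED transaction (a minimal sketch):

BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
  SHOW transaction_isolation;
  SHOW default_transaction_isolation;
COMMIT;

The first SHOW returns read committed, while the second returns serializable.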

READ COMMITTED transaction behavior

READ COMMITTED and SERIALIZABLE transactions both serve globally consistent ("non-stale") reads and commit atomically. READ COMMITTED transactions have the following differences:

  • Writes in concurrent READ COMMITTED transactions can interleave without aborting transactions, and a write can never block a non-locking read of the same row. This is because READ COMMITTED transactions are not required to be placed into a serializable ordering.

  • Whereas statements in SERIALIZABLE transactions see data that committed before the transaction began, statements in READ COMMITTED transactions see data that committed before each statement began. If rows are being updated by concurrent writes, reads in a READ COMMITTED transaction can return different results.

    Note:

    For details on how this is implemented, see Read snapshots.

  • Due to the preceding behaviors, READ COMMITTED transactions permit some types of concurrency anomalies that are prevented in SERIALIZABLE transactions. For details and examples, see Concurrency anomalies.

  • You can mitigate concurrency anomalies by issuing locking reads in READ COMMITTED transactions. These statements can block concurrent transactions that are issuing writes or other locking reads on the same rows.

  • When using READ COMMITTED isolation, you do not need to implement client-side retries to handle serialization errors under transaction contention. READ COMMITTED transactions never return RETRY_SERIALIZABLE errors, and will only return 40001 errors in limited cases, as described in the following points.

READ COMMITTED transactions can abort in certain scenarios:

  • Transactions at all isolation levels are subject to lock contention, where a transaction attempts to lock a row that is already locked by a write or locking read. In such cases, the later transaction is blocked until the earlier transaction commits or rolls back, thus releasing its lock on the row. Lock contention that produces a deadlock between two transactions will result in a transaction abort and a 40001 error (ABORT_REASON_ABORTED_RECORD_FOUND or ABORT_REASON_PUSHER_ABORTED) returned to the client.

  • Constraint violations will abort transactions at all isolation levels.

  • In rare cases under READ COMMITTED isolation, a RETRY_WRITE_TOO_OLD or ReadWithinUncertaintyIntervalError error can be returned to the client if a statement has already begun streaming a partial result set back to the client and cannot retry transparently. By default, the result set is buffered up to 16 KiB before overflowing and being streamed to the client. You can configure the result buffer size using the sql.defaults.results_buffer.size cluster setting, as shown in the example after this list.
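
For example, to raise the result buffer size so that larger result sets can be retried transparently (a sketch; choose a value appropriate for your workload):

SET CLUSTER SETTING sql.defaults.results_buffer.size = '512 KiB';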

Concurrency anomalies

Statements in concurrent READ COMMITTED transactions can interleave with each other. This can create concurrency anomalies that are not permitted under SERIALIZABLE isolation, which places concurrent transactions into a serializable ordering.

Tip:

The behaviors described in this section assume the use of non-locking reads. You can prevent concurrency anomalies through the selective use of locking reads, which can also increase latency due to lock contention.

Non-repeatable reads and phantom reads

READ COMMITTED transactions can serve different reads over the course of a transaction.

Non-repeatable reads return different row values because a concurrent transaction updated the values in between reads:

  1. Transaction A reads row R at timestamp 1.
  2. Transaction B writes to row R and commits at timestamp 2.
  3. Transaction A reads row R and gets a different result at timestamp 3.

Phantom reads return different rows because a concurrent transaction changed the set of rows that satisfy the row search:

  1. Transaction A reads the set of rows S at timestamp 1.
  2. Transaction B inserts, deletes, or updates rows in S and commits at timestamp 2.
  3. Transaction A reads the set of rows S and gets a different result at timestamp 3.

Whereas statements in SERIALIZABLE transactions see data that committed before the transaction began, statements in READ COMMITTED transactions see data that committed before each statement began.

Note:

For details on how this is implemented, see Read snapshots.

Example: Non-repeatable reads and phantom reads

In a terminal window (Session 1), create a table and insert some values:


  CREATE TABLE kv (k INT PRIMARY KEY, v INT);


  INSERT INTO kv VALUES (1, 2);

Begin a READ COMMITTED transaction and read a table row:


  BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
    SELECT * FROM kv WHERE v = 2;
    k | v
  ----+----
    1 | 2

In a new terminal window (Session 2), begin another READ COMMITTED transaction:


  BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;

Update the table row, insert a new row, and commit the transaction:


  UPDATE kv SET k = 2 WHERE v = 2;
    INSERT INTO kv VALUES (3, 2);
    COMMIT;

In Session 1, issue the read again:


  SELECT * FROM kv WHERE v = 2;
    k | v
  ----+----
    2 | 2
    3 | 2

Lost update anomaly

The READ COMMITTED conditions that permit non-repeatable reads and phantom reads also permit lost update anomalies, where an update from a transaction appears to be "lost" because it is overwritten by a concurrent transaction:

  1. Transaction A reads row R at timestamp 1.
  2. Transaction B writes to row R and commits at timestamp 2.
  3. Transaction A writes to row R and commits at timestamp 3.

The value of R has changed while transaction A is open. However, A can still write to R and commit, effectively overwriting the update from transaction B.

Note:

Under SERIALIZABLE isolation, transaction A would have aborted with a RETRY_WRITE_TOO_OLD error, prompting the client to retry the transaction.

Example: Lost update anomaly

In a terminal window (Session 1), create a table and insert some values:


  CREATE TABLE kv (k INT PRIMARY KEY, v INT);


  INSERT INTO kv VALUES (1, 2);

Begin a READ COMMITTED transaction and read a table row:


  BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
    SELECT * FROM kv WHERE k = 1;
    k | v
  ----+----
    1 | 2

In a new terminal window (Session 2), begin another READ COMMITTED transaction:


  BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;

Update the table row and commit the transaction:


  UPDATE kv SET v = 3 WHERE k = 1;
    COMMIT;

In Session 1, update the table row again and commit the transaction:


  UPDATE kv SET v = 4 WHERE k = 1;
    COMMIT;

Read the table row and see that it reflects the update from Session 1:


  SELECT * FROM kv WHERE k = 1;
    k | v
  ----+----
    1 | 4

The update in Session 2 appears to be "lost" because its result is overwritten by a concurrent transaction. It is not lost at the database level, and can be found using AS OF SYSTEM TIME and a timestamp earlier than the commit in Session 1:

SELECT * FROM kv AS OF SYSTEM TIME '2023-11-09 21:22:10' WHERE k = 1;
  k | v
----+----
  1 | 3
Note:

While concurrent READ COMMITTED transactions can have their committed writes overwritten, uncommitted writes in READ COMMITTED transactions cannot be overwritten.

Write skew anomaly

The following sequence of operations on a table is possible under READ COMMITTED isolation:

  1. Transaction A reads row R at timestamp 1.
  2. Transaction B reads row S at timestamp 2.
  3. Transaction A writes to row S and commits at timestamp 3.
  4. Transaction B writes to row R and commits at timestamp 4.

Transaction A updates the value of S based on the R value it reads at timestamp 1. Transaction B updates the value of R based on the S value it reads at timestamp 2. The value of S has changed while transaction B is open, but B can still write and commit instead of aborting, since READ COMMITTED transactions do not require serializability. This is the basis of potential write skew anomalies where two concurrent transactions each read values that the other subsequently updates.

Note:

For details on why this is allowed, see Read refreshing.

Example: Write skew anomaly

For an example of how a write skew anomaly can occur, see Demonstrate interleaved statements in READ COMMITTED transactions.

Locking reads

To reduce the occurrence of concurrency anomalies in READ COMMITTED isolation, you can strengthen the isolation of individual reads by using SELECT ... FOR UPDATE or SELECT ... FOR SHARE to issue locking reads on specific rows. Locking reads behave similarly to writes: they lock qualifying rows to prevent concurrent writes from modifying them until the transaction commits. Conversely, if a locking read finds that a row is exclusively locked by a concurrent transaction, it waits for the other transaction to commit or roll back before proceeding. A locking read in a transaction will always have the latest version of a row when the transaction commits.

The clause used with the SELECT statement determines the lock strength of a locking read:

  • SELECT FOR UPDATE obtains an exclusive lock on each qualifying row, blocking concurrent writes and locking reads on the row. Only one transaction can hold an exclusive lock on a row at a time, and only the transaction holding the exclusive lock can write to the row. For an example, see Reserve rows for updates using exclusive locks.

  • SELECT FOR SHARE obtains a shared lock on each qualifying row, blocking concurrent writes and exclusive locking reads on the row. Multiple transactions can hold a shared lock on a row at the same time. When multiple transactions hold a shared lock on a row, none can write to the row. A shared lock grants transactions mutual read-only access to a row, and ensures that they read the latest version of the row. For an example, see Reserve row values using shared locks.

When a SELECT FOR UPDATE or SELECT FOR SHARE read is issued on a row, only the latest version of the row is returned to the client. Under READ COMMITTED isolation, neither statement will block concurrent, non-locking reads.
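
For example, using the kv table from the earlier examples, the two lock strengths are requested as follows (a minimal sketch):

SELECT * FROM kv WHERE k = 1 FOR UPDATE;
SELECT * FROM kv WHERE k = 1 FOR SHARE;

The first statement acquires an exclusive lock on the matching row; the second acquires a shared lock.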

When to use locking reads

Use locking reads in your application if certain READ COMMITTED transactions must guarantee that the data they access will not be changed by intermediate writes.

Non-locking reads can allow intermediate writes to update rows before READ COMMITTED transactions commit, potentially creating concurrency anomalies. Locking reads prevent such anomalies, but increase lock contention, which may require intervention if latency becomes too high. Note that locking reads do not prevent phantom reads caused by the insertion of new rows, since only existing rows can be locked.

Note:

Locking reads are not effective for emulating SERIALIZABLE transactions, which can avoid locking reads because they always retry or abort if reads are not current. As a result, READ COMMITTED transactions that use locking reads will perform differently than SERIALIZABLE transactions at various levels of concurrency.

To use locking reads:

  • If you need to read and later update a row within a transaction, use SELECT ... FOR UPDATE to acquire an exclusive lock on the row. This guarantees data integrity between the transaction's read and write operations. A sketch of this pattern follows this list.

  • If you need to read the latest version of a row, and later update a different row within a transaction, use SELECT ... FOR SHARE to acquire a shared lock on the row. This blocks all concurrent writes on the row without unnecessarily blocking concurrent reads or other SELECT ... FOR SHARE queries.

    Tip:

    This allows an application to build cross-row consistency constraints by ensuring that rows that are read in a READ COMMITTED transaction will not change before the writes in the same transaction have been committed.
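
A minimal sketch of the read-then-update pattern from the first bullet, using the kv table from the earlier examples:

BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
  SELECT v FROM kv WHERE k = 1 FOR UPDATE;
  UPDATE kv SET v = v + 1 WHERE k = 1;
COMMIT;

Because the SELECT holds an exclusive lock on the row until the transaction commits, no concurrent transaction can change v between the read and the write, so the increment cannot be lost.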

Examples

In this scenario:

  • A hospital has an application for doctors to manage their on-call shifts.
  • The hospital has a rule that at least one doctor must be on call at any one time.
  • Two doctors are on call for a particular shift, and both of them try to request leave for the shift in two concurrent transactions.
  • Under the READ COMMITTED isolation level, the write skew anomaly can potentially result in both doctors successfully booking leave and the hospital having no doctors on call for that particular shift.

The following examples demonstrate how to:

  • Demonstrate interleaved statements in READ COMMITTED transactions
  • Reserve rows for updates using exclusive locks
  • Reserve row values using shared locks

Before you begin

  1. Open the SQL shell using cockroach demo.

  2. Enable READ COMMITTED transactions:

    SET CLUSTER SETTING sql.txn.read_committed_isolation.enabled = 'true';
    
  3. Create the doctors table:

    CREATE TABLE doctors (
      id INT PRIMARY KEY,
      name TEXT
    );
    
  4. Create the schedules table:

    CREATE TABLE schedules (
      day DATE,
      doctor_id INT REFERENCES doctors (id),
      on_call BOOL,
      PRIMARY KEY (day, doctor_id)
    );
    
  5. Add two doctors to the doctors table:

    INSERT INTO doctors VALUES
      (1, 'Abe'),
      (2, 'Betty');
    
  6. Insert one week's worth of data into the schedules table:

    INSERT INTO schedules VALUES
      ('2023-12-01', 1, true),
      ('2023-12-01', 2, true),
      ('2023-12-02', 1, true),
      ('2023-12-02', 2, true),
      ('2023-12-03', 1, true),
      ('2023-12-03', 2, true),
      ('2023-12-04', 1, true),
      ('2023-12-04', 2, true),
      ('2023-12-05', 1, true),
      ('2023-12-05', 2, true),
      ('2023-12-06', 1, true),
      ('2023-12-06', 2, true),
      ('2023-12-07', 1, true),
      ('2023-12-07', 2, true);
    

Demonstrate interleaved statements in READ COMMITTED transactions

Before proceeding, reset the example scenario:

UPDATE schedules SET on_call = true WHERE on_call = false;

Confirm that at least one doctor is on call each day of the week:

SELECT day, count(*) AS on_call FROM schedules
  WHERE on_call = true
  GROUP BY day
  ORDER BY day;  
     day     | on_call
-------------+----------
  2023-12-01 |       2
  2023-12-02 |       2
  2023-12-03 |       2
  2023-12-04 |       2
  2023-12-05 |       2
  2023-12-06 |       2
  2023-12-07 |       2

Doctor 1, Abe, starts to request leave for 2023-12-05 using the hospital's schedule management application.

Start a transaction:

BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;

Check to make sure that another doctor is on call for 2023-12-05:

SELECT * FROM schedules
  WHERE day = '2023-12-05';
     day     | doctor_id | on_call
-------------+-----------+----------
  2023-12-05 |         1 |    t
  2023-12-05 |         2 |    t

Around the same time, Doctor 2, Betty, starts to request leave for the same day using the hospital's schedule management application.

In a new terminal (Session 2), open the SQL shell on your cockroach demo cluster. Start a transaction:

BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;

Check to make sure that another doctor is on call for 2023-12-05:

SELECT * FROM schedules
  WHERE day = '2023-12-05';
     day     | doctor_id | on_call
-------------+-----------+----------
  2023-12-05 |         1 |    t
  2023-12-05 |         2 |    t

In Session 1, the previous read confirmed that another doctor is available on 2023-12-05. Update the schedule to put Abe on leave:

UPDATE schedules SET on_call = false
  WHERE day = '2023-12-05'
  AND doctor_id = 1;

Read the rows for 2023-12-05. Session 1 sees that only Abe is on leave once its transaction commits:

SELECT * FROM schedules
  WHERE day = '2023-12-05';
     day     | doctor_id | on_call
-------------+-----------+----------
  2023-12-05 |         1 |    f
  2023-12-05 |         2 |    t

In Session 2, the previous read confirmed that another doctor is available on 2023-12-05. Update the schedule to put Betty on leave:

UPDATE schedules SET on_call = false
  WHERE day = '2023-12-05'
  AND doctor_id = 2;

Read the rows for 2023-12-05. Session 2 sees that only Betty is on leave once its transaction commits:

SELECT * FROM schedules
  WHERE day = '2023-12-05';
     day     | doctor_id | on_call
-------------+-----------+----------
  2023-12-05 |         1 |    t
  2023-12-05 |         2 |    f

In Session 1, commit the transaction:

COMMIT;

By design under READ COMMITTED isolation, CockroachDB allows the transaction to commit even though the rows returned by its earlier read (the SELECT query) have since been changed by the concurrent transaction in Session 2.

In Session 2, read the rows for 2023-12-05 again:

SELECT * FROM schedules
  WHERE day = '2023-12-05';
     day     | doctor_id | on_call
-------------+-----------+----------
  2023-12-05 |         1 |    f
  2023-12-05 |         2 |    f

The result has changed because Session 1 committed earlier and updated the on_call value for doctor 1, thus changing the read result for the transaction in Session 2.

If the transaction in Session 2 commits, its update to the on_call value for Betty will create a write skew anomaly: neither Abe nor Betty will be scheduled to be on call on 2023-12-05.

Instead, roll back the transaction so that the write skew anomaly is not committed:

ROLLBACK;

Reserve rows for updates using exclusive locks

Before proceeding, reset the example scenario:

UPDATE schedules SET on_call = true WHERE on_call = false;

Confirm that at least one doctor is on call each day of the week:

SELECT day, count(*) AS on_call FROM schedules
  WHERE on_call = true
  GROUP BY day
  ORDER BY day;  
     day     | on_call
-------------+----------
  2023-12-01 |       2
  2023-12-02 |       2
  2023-12-03 |       2
  2023-12-04 |       2
  2023-12-05 |       2
  2023-12-06 |       2
  2023-12-07 |       2

Doctor 1, Abe, starts to request leave for 2023-12-05 using the hospital's schedule management application.

Start a transaction:

BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;

Check to make sure that another doctor is on call for 2023-12-05. Use FOR UPDATE to lock the rows so that only the current transaction can update them:

Tip:
Include an ORDER BY clause to force locking to occur in a specific order. This prevents potential deadlock with another locking read on the same rows, which can cause the transaction to abort.

SELECT * FROM schedules
  WHERE day = '2023-12-05'
  ORDER BY doctor_id
  FOR UPDATE;
     day     | doctor_id | on_call
-------------+-----------+----------
  2023-12-05 |         1 |    t
  2023-12-05 |         2 |    t

Around the same time, Doctor 2, Betty, starts to request leave for the same day using the hospital's schedule management application.

In a new terminal (Session 2), open the SQL shell on your cockroach demo cluster. Start a transaction:

BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;

Check to make sure that another doctor is on call for 2023-12-05. Use FOR UPDATE to lock the rows so that only the current transaction can update them:

SELECT * FROM schedules
  WHERE day = '2023-12-05'
  ORDER BY doctor_id
  FOR UPDATE;

However, because Session 1 has already acquired an exclusive lock on these rows, the current transaction is blocked until Session 1 releases its lock.

In Session 1, the previous read confirmed that another doctor is available on 2023-12-05. Update the schedule to put Abe on leave:

UPDATE schedules SET on_call = false
  WHERE day = '2023-12-05'
  AND doctor_id = 1;

Commit the transaction:

COMMIT;

Once the transaction in Session 1 commits, it releases its exclusive lock. Session 2 can read the rows for 2023-12-05, which show that Abe has already been put on leave for that day:

     day     | doctor_id | on_call
-------------+-----------+----------
  2023-12-05 |         1 |    f
  2023-12-05 |         2 |    t

Roll back the transaction:

ROLLBACK;

Reserve row values using shared locks

Before proceeding, reset the example scenario:

UPDATE schedules SET on_call = true WHERE on_call = false;

Confirm that at least one doctor is on call each day of the week:

SELECT day, count(*) AS on_call FROM schedules
  WHERE on_call = true
  GROUP BY day
  ORDER BY day;  
     day     | on_call
-------------+----------
  2023-12-01 |       2
  2023-12-02 |       2
  2023-12-03 |       2
  2023-12-04 |       2
  2023-12-05 |       2
  2023-12-06 |       2
  2023-12-07 |       2

Doctor 1, Abe, starts to request leave for 2023-12-05 using the hospital's schedule management application.

Start a transaction:

BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;

Check to make sure that another doctor is on call for 2023-12-05. Use FOR SHARE to lock the rows so that they cannot be updated by another transaction:

SELECT * FROM schedules
  WHERE day = '2023-12-05'
  FOR SHARE;
     day     | doctor_id | on_call
-------------+-----------+----------
  2023-12-05 |         1 |    t
  2023-12-05 |         2 |    t

Around the same time, Doctor 2, Betty, starts to request leave for the same day using the hospital's schedule management application.

In a new terminal (Session 2), open the SQL shell on your cockroach demo cluster. Start a transaction:

BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;

Check to make sure that another doctor is on call for 2023-12-05. Use FOR SHARE to lock the rows so that they cannot be updated by another transaction:

SELECT * FROM schedules
  WHERE day = '2023-12-05'
  FOR SHARE;
     day     | doctor_id | on_call
-------------+-----------+----------
  2023-12-05 |         1 |    t
  2023-12-05 |         2 |    t

Shared locks are typically used when a transaction needs to read the latest version of a row, but does not need to update the row. With the rows locked by both Sessions 1 and 2, a third Session 3 is blocked from updating the rows:

UPDATE schedules SET on_call = false
  WHERE day = '2023-12-05'
  AND doctor_id = 1;

In Sessions 1 and 2, commit or roll back the transactions to release the shared locks:

COMMIT;

Once both locks are released, Session 3 can complete the update to place Abe on leave:

UPDATE 1

In Session 3, read the rows for 2023-12-05 and confirm that Betty is still on call:

SELECT * FROM schedules
  WHERE day = '2023-12-05';
     day     | doctor_id | on_call
-------------+-----------+----------
  2023-12-05 |         1 |    f
  2023-12-05 |         2 |    t

Known limitations

  • Schema changes (e.g., CREATE TABLE, CREATE SCHEMA, CREATE INDEX) cannot be performed within explicit READ COMMITTED transactions, and will cause transactions to abort. As a workaround, set the transaction's isolation level to SERIALIZABLE, as shown in the sketch after this list. #114778
  • READ COMMITTED transactions performing INSERT, UPDATE, or UPSERT cannot access REGIONAL BY ROW tables in which UNIQUE and PRIMARY KEY constraints exist, the region is not included in the constraint, and the region cannot be computed from the constraint columns.
  • Multi-column-family checks during updates are not supported under READ COMMITTED isolation. #112488
  • Because locks acquired by foreign key checks, SELECT FOR UPDATE, and SELECT FOR SHARE are fully replicated under READ COMMITTED isolation, some queries experience a delay for Raft replication.
  • Foreign key checks are not performed in parallel under READ COMMITTED isolation.
  • SELECT FOR UPDATE and SELECT FOR SHARE statements are less optimized under READ COMMITTED isolation than under SERIALIZABLE isolation. Under READ COMMITTED isolation, SELECT FOR UPDATE and SELECT FOR SHARE usually perform an extra lookup join for every locked table when compared to the same queries under SERIALIZABLE. In addition, some optimization steps (such as de-correlation of correlated subqueries) are not currently performed on these queries.
  • Regardless of isolation level, SELECT FOR UPDATE and SELECT FOR SHARE statements in CockroachDB do not prevent insertion of new rows matching the search condition (i.e., phantom reads). This matches PostgreSQL behavior at all isolation levels. #120673
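
A sketch of the workaround for the first limitation, running the schema change at SERIALIZABLE isolation (audit_log is a hypothetical table name used only for illustration):

BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
  CREATE TABLE audit_log (id INT PRIMARY KEY, msg TEXT);
COMMIT;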
