This page describes newly identified limitations in the CockroachDB v22.1.22 release as well as unresolved limitations identified in earlier releases.
New limitations
A multi-region table cannot be restored into a non-multi-region table
You cannot restore a multi-region table into a non-multi-region table.
Statements containing multiple modification subqueries of the same table are disallowed
Statements containing multiple modification subqueries mutating the same row could cause corruption. These statements are disallowed by default, but you can enable multiple modification subqueries with one of the following:
- Set the sql.multiple_modifications_of_table.enabled cluster setting to true.
- Use the enable_multiple_modifications_of_table session variable.
Note that if multiple mutations inside the same statement affect different tables with FOREIGN KEY relations and ON CASCADE clauses between them, the results will be different from what is expected in PostgreSQL.
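For example, either of the following statements enables multiple modification subqueries; this is a sketch only, and the cluster setting applies cluster-wide while the session variable applies only to the current session:
-- Enable for the whole cluster.
SET CLUSTER SETTING sql.multiple_modifications_of_table.enabled = true;
-- Or enable only for the current session.
SET enable_multiple_modifications_of_table = true;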
transaction_rows_read_err and transaction_rows_written_err do not halt query execution
The transaction_rows_read_err and transaction_rows_written_err session settings limit the number of rows read or written by a single transaction. These session settings will fail the transaction with an error, but not until the current query finishes executing and the results have been returned to the client.
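For example, the following sketch sets both limits for the current session; the values shown are illustrative only:
-- Fail any transaction that reads more than 10,000 rows.
SET transaction_rows_read_err = 10000;
-- Fail any transaction that writes more than 1,000 rows.
SET transaction_rows_written_err = 1000;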
sql.guardrails.max_row_size_err misses indexed virtual computed columns
The sql.guardrails.max_row_size_err cluster setting misses large rows caused by indexed virtual computed columns. This is because the guardrail only checks the size of primary key rows, not secondary index rows.
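For reference, the guardrail is a byte-size cluster setting; the value below is illustrative only:
-- Error on writes whose primary key row exceeds 1 MiB.
-- Rows that are large only because of indexed virtual computed columns are not caught.
SET CLUSTER SETTING sql.guardrails.max_row_size_err = '1MiB';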
SQL-only command-line client executable limitations
- The SQL-only command-line client executable does not support environment variables, and therefore cannot be used to override the default connection parameters. Tracking GitHub Issue
- The SQL-only command-line client executable does not support the --certs-dir flag. To pass a PostgreSQL connection URL to the client, use the --url flag. Tracking GitHub Issue
Row-Level TTL limitations
- You cannot use foreign keys to create references to or from a table that uses Row-Level TTL. cockroachdb/cockroach#76407
- Any queries you run against tables with Row-Level TTL enabled do not filter out expired rows from the result set (this includes UPDATEs and DELETEs). This feature may be added in a future release. For now, follow the instructions in Filter out expired rows from a selection query; a sketch appears after this list.
- The TTL cannot be customized based on the values of other columns in the row. cockroachdb/cockroach#76916
- Because of the above limitation, adding TTL to large existing tables can negatively affect performance, since a new column must be created and backfilled for every row. Creating a new table with a TTL is not affected by this limitation.
- The queries executed by Row-Level TTL are not yet optimized for performance:
  - They do not use any indexes that may be available on the crdb_internal_expiration column.
  - They do not take into account node localities.
  - All deletes are run on a single node, instead of being distributed.
  - For details, see cockroachdb/cockroach#76914
- If you override the TTL for a row by setting crdb_internal_expiration directly, and the row is later updated (e.g., using an ON UPDATE expression), the TTL override is lost; it is reset to now() + ttl_expire_after.
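As a sketch of the filtering workaround mentioned above, assuming a hypothetical TTL-enabled table named events, you can exclude already-expired rows by comparing against the crdb_internal_expiration column:
-- Exclude rows whose TTL has passed but which have not yet been deleted.
SELECT * FROM events WHERE crdb_internal_expiration > now();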
Change data capture limitations
Change data capture (CDC) provides efficient, distributed, row-level changefeeds into Apache Kafka for downstream processing such as reporting, caching, or full-text indexing. It has the following known limitations:
- Changefeeds cannot be backed up or restored. Tracking GitHub Issue
- Changefeed target options are limited to tables and column families. Tracking GitHub Issue
- Using a cloud storage sink only works with JSON and emits newline-delimited JSON files. Tracking GitHub Issue
- Webhook sinks only support HTTPS. Use the insecure_tls_skip_verify parameter when testing to disable certificate verification; however, this still requires HTTPS and certificates. Tracking GitHub Issue
- Webhook sinks and Google Cloud Pub/Sub sinks only support emitting JSON. Tracking GitHub Issue
- There is no concurrency configurability for webhook sinks. Tracking GitHub Issue
- Using the split_column_families and resolved options on the same changefeed will cause an error when using the following sinks: Kafka and Google Cloud Pub/Sub. Instead, use the individual FAMILY keyword to specify column families when creating a changefeed; a sketch appears after this list. Tracking GitHub Issue
- There is no configuration for unordered messages for Google Cloud Pub/Sub sinks. You must specify the region parameter in the URI to maintain ordering guarantees. Tracking GitHub Issue
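For example, a minimal sketch of targeting a single column family rather than using split_column_families; the table name, family name, and Kafka address are hypothetical:
CREATE CHANGEFEED FOR TABLE accounts FAMILY balance_family
  INTO 'kafka://localhost:9092'
  WITH resolved;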
SHOW BACKUP does not work with locality-aware backups and the incremental_location option
SHOW BACKUP can display backups taken with the incremental_location option, or locality-aware backups. It will not display locality-aware backups that were also taken with the incremental_location option. Tracking GitHub issue.
Limited SQL cursor support
CockroachDB implements SQL cursor support with the following limitations:
- DECLARE only supports forward cursors. Reverse cursors created with DECLARE SCROLL are not supported. cockroachdb/cockroach#77102
- FETCH supports forward, relative, and absolute variants, but only for forward cursors. cockroachdb/cockroach#77102
- BINARY CURSOR, which returns data in the Postgres binary format, is not supported. cockroachdb/cockroach#77099
- MOVE, which allows advancing the cursor without returning any rows, is not supported. cockroachdb/cockroach#77100
- WITH HOLD, which allows keeping a cursor open for longer than a transaction by writing its results into a buffer, is accepted as valid syntax within a single transaction but is not supported. It acts as a no-op and does not actually perform the function of WITH HOLD, which is to make the cursor live outside its parent transaction. Instead, if you are using WITH HOLD, you will be forced to close that cursor within the transaction it was created in. cockroachdb/cockroach#77101
This syntax is accepted (but does not have any effect):
BEGIN;
DECLARE test_cur CURSOR WITH HOLD FOR SELECT * FROM foo ORDER BY bar;
CLOSE test_cur;
COMMIT;
This syntax is not accepted, and will result in an error:
BEGIN;
DECLARE test_cur CURSOR WITH HOLD FOR SELECT * FROM foo ORDER BY bar;
COMMIT;
-- This will fail with an error because CLOSE test_cur was not called inside the transaction.
- Scrollable cursors (also known as reverse FETCH) are not supported.
- SELECT ... FOR UPDATE with a cursor is not supported. cockroachdb/cockroach#77103
- Respect for SAVEPOINTs is not supported. Cursor definitions do not disappear properly if rolled back to a SAVEPOINT from before they were created. cockroachdb/cockroach#77104
Unresolved limitations
SELECT FOR UPDATE locks are dropped on lease transfers and range splits/merges
Locks acquired using SELECT ... FOR UPDATE are dropped on lease transfers and range splits and merges. SELECT ... FOR UPDATE locks should be thought of as best-effort, and should not be relied upon for correctness, as they are implemented as fast, in-memory unreplicated locks.
If a lease transfer or range split/merge occurs on a range held by an unreplicated lock, the lock is dropped, and the following behaviors can occur:
- The desired ordering of concurrent accesses to one or more rows of a table expressed by your use of SELECT ... FOR UPDATE may not be preserved (that is, a transaction B against some table T that was supposed to wait behind another transaction A operating on T may not wait for transaction A).
- The transaction that acquired the (now dropped) unreplicated lock may fail to commit, leading to transaction retry errors with code 40001 and the restart transaction error message.
We intend to improve the reliability of these locks. For details, see cockroachdb/cockroach#75456.
Note that serializable isolation is preserved despite this limitation.
CockroachDB does not properly optimize some left and anti joins with GIN indexes
Left joins and anti joins involving JSONB, ARRAY, or spatial-typed columns with a multi-column or partitioned GIN index will not take advantage of the index if the prefix columns of the index are unconstrained, or if they are constrained to multiple, constant values.
To work around this limitation, make sure that the prefix columns of the index are either constrained to single constant values, or are part of an equality condition with an input column (e.g., col1 = col2, where col1 is a prefix column and col2 is an input column).
For example, suppose you have the following multi-region database and tables:
CREATE DATABASE multi_region_test_db PRIMARY REGION "europe-west1" REGIONS "us-west1", "us-east1" SURVIVE REGION FAILURE;
USE multi_region_test_db;
CREATE TABLE t1 (
k INT PRIMARY KEY,
geom GEOMETRY
);
CREATE TABLE t2 (
k INT PRIMARY KEY,
geom GEOMETRY,
INVERTED INDEX geom_idx (geom)
) LOCALITY REGIONAL BY ROW;
And you insert some data into the tables:
INSERT INTO t1 SELECT generate_series(1, 1000), 'POINT(1.0 1.0)';
INSERT INTO t2 (crdb_region, k, geom) SELECT 'us-east1', generate_series(1, 1000), 'POINT(1.0 1.0)';
INSERT INTO t2 (crdb_region, k, geom) SELECT 'us-west1', generate_series(1001, 2000), 'POINT(2.0 2.0)';
INSERT INTO t2 (crdb_region, k, geom) SELECT 'europe-west1', generate_series(2001, 3000), 'POINT(3.0 3.0)';
If you attempt a left join between t1 and t2 on only the geometry columns, CockroachDB will not be able to plan an inverted join:
> EXPLAIN SELECT * FROM t1 LEFT JOIN t2 ON st_contains(t1.geom, t2.geom);
info
------------------------------------
distribution: full
vectorized: true
• cross join (right outer)
│ pred: st_contains(geom, geom)
│
├── • scan
│ estimated row count: 3,000
│ table: t2@primary
│ spans: FULL SCAN
│
└── • scan
estimated row count: 1,000
table: t1@primary
spans: FULL SCAN
(15 rows)
However, if you constrain the crdb_region column to a single value, CockroachDB can plan an inverted join:
> EXPLAIN SELECT * FROM t1 LEFT JOIN t2 ON st_contains(t1.geom, t2.geom) AND t2.crdb_region = 'us-east1';
info
--------------------------------------------------
distribution: full
vectorized: true
• lookup join (left outer)
│ table: t2@primary
│ equality: (crdb_region, k) = (crdb_region,k)
│ equality cols are key
│ pred: st_contains(geom, geom)
│
└── • inverted join (left outer)
│ table: t2@geom_idx
│
└── • render
│
└── • scan
estimated row count: 1,000
table: t1@primary
spans: FULL SCAN
(18 rows)
If you do not know which region to use, you can combine queries with UNION ALL:
> EXPLAIN SELECT * FROM t1 LEFT JOIN t2 ON st_contains(t1.geom, t2.geom) AND t2.crdb_region = 'us-east1'
UNION ALL SELECT * FROM t1 LEFT JOIN t2 ON st_contains(t1.geom, t2.geom) AND t2.crdb_region = 'us-west1'
UNION ALL SELECT * FROM t1 LEFT JOIN t2 ON st_contains(t1.geom, t2.geom) AND t2.crdb_region = 'europe-west1';
info
----------------------------------------------------------
distribution: full
vectorized: true
• union all
│
├── • union all
│ │
│ ├── • lookup join (left outer)
│ │ │ table: t2@primary
│ │ │ equality: (crdb_region, k) = (crdb_region,k)
│ │ │ equality cols are key
│ │ │ pred: st_contains(geom, geom)
│ │ │
│ │ └── • inverted join (left outer)
│ │ │ table: t2@geom_idx
│ │ │
│ │ └── • render
│ │ │
│ │ └── • scan
│ │ estimated row count: 1,000
│ │ table: t1@primary
│ │ spans: FULL SCAN
│ │
│ └── • lookup join (left outer)
│ │ table: t2@primary
│ │ equality: (crdb_region, k) = (crdb_region,k)
│ │ equality cols are key
│ │ pred: st_contains(geom, geom)
│ │
│ └── • inverted join (left outer)
│ │ table: t2@geom_idx
│ │
│ └── • render
│ │
│ └── • scan
│ estimated row count: 1,000
│ table: t1@primary
│ spans: FULL SCAN
│
└── • lookup join (left outer)
│ table: t2@primary
│ equality: (crdb_region, k) = (crdb_region,k)
│ equality cols are key
│ pred: st_contains(geom, geom)
│
└── • inverted join (left outer)
│ table: t2@geom_idx
│
└── • render
│
└── • scan
estimated row count: 1,000
table: t1@primary
spans: FULL SCAN
(54 rows)
Using RESTORE with multi-region table localities
- Restoring GLOBAL and REGIONAL BY TABLE tables into a non-multi-region database is not supported. Tracking GitHub Issue
- REGIONAL BY TABLE and REGIONAL BY ROW tables can be restored only if the regions of the backed-up table match those of the target database. All of the following must be true for RESTORE to be successful:
  - The source database and the destination database have the same set of regions.
  - The regions were added to each of the databases in the same order.
  - The databases have the same primary region.
The following example would be considered as having mismatched regions because the database regions were not added in the same order and the primary regions do not match.
Running on the source database:
ALTER DATABASE source_database SET PRIMARY REGION "us-east1";
ALTER DATABASE source_database ADD region "us-west1";
Running on the destination database:
ALTER DATABASE destination_database SET PRIMARY REGION "us-west1";
ALTER DATABASE destination_database ADD region "us-east1";
In addition, the following scenario has mismatched regions between the databases since the regions were not added to the database in the same order.
Running on the source database:
ALTER DATABASE source_database SET PRIMARY REGION "us-east1";
ALTER DATABASE source_database ADD region "us-west1";
Running on the destination database:
ALTER DATABASE destination_database SET PRIMARY REGION "us-west1";
ALTER DATABASE destination_database ADD region "us-east1";
ALTER DATABASE destination_database SET PRIMARY REGION "us-east1";
SET does not ROLLBACK in a transaction
SET does not properly apply ROLLBACK within a transaction. For example, in the following transaction, showing the TIME ZONE variable does not return 2 as expected after the rollback:
SET TIME ZONE +2;
BEGIN;
SET TIME ZONE +3;
ROLLBACK;
SHOW TIME ZONE;
timezone
------------
3
JSONB/JSON comparison operators are not implemented
You cannot use comparison operators (such as < or >) on JSONB elements. For example, the following query does not work and returns an error:
SELECT '{"a": 1}'::JSONB -> 'a' < '{"b": 2}'::JSONB -> 'b';
ERROR: unsupported comparison operator: <jsonb> < <jsonb>
SQLSTATE: 22023
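As a workaround, you can extract the values as text with the ->> operator and cast them if a non-string comparison is needed. This is a sketch only:
-- Compare the extracted values as integers instead of as JSONB.
SELECT ('{"a": 1}'::JSONB ->> 'a')::INT < ('{"b": 2}'::JSONB ->> 'b')::INT;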
Locality-optimized search works only for queries selecting a limited number of records
- Locality optimized search works only for queries selecting a limited number of records (up to 100,000 unique keys).
Expression indexes cannot reference computed columns
CockroachDB does not allow expression indexes to reference computed columns.
Materialized view limitations
The optimizer may not select the most optimal query plan when querying materialized views because CockroachDB does not collect statistics on materialized views.
CockroachDB cannot refresh materialized views inside explicit transactions. Trying to refresh a materialized view inside an explicit transaction will result in an error.
- Start cockroach demo with the sample bank data set:
  cockroach demo bank
- Create the materialized view described in Usage.
- Start a new multi-statement transaction with BEGIN TRANSACTION:
  BEGIN TRANSACTION;
- Inside the open transaction, attempt to refresh the view. This will result in an error:
  REFRESH MATERIALIZED VIEW overdrawn_accounts;
  ERROR: cannot refresh view in an explicit transaction
  SQLSTATE: 25000
CockroachDB cannot plan locality optimized searches that use partitioned unique indexes on virtual computed columns
- Locality optimized search does not work for queries that use partitioned unique indexes on virtual computed columns. A workaround for computed columns is to make the virtual computed column a stored computed column.
- Locality optimized search does not work for queries that use partitioned unique expression indexes. Tracking GitHub Issue
Expressions as ON CONFLICT targets are not supported
CockroachDB does not support expressions as ON CONFLICT targets. This means that unique expression indexes cannot be selected as arbiters for INSERT .. ON CONFLICT statements. For example:
CREATE TABLE t (a INT, b INT, UNIQUE INDEX ((a + b)));
CREATE TABLE
INSERT INTO t VALUES (1, 2) ON CONFLICT ((a + b)) DO NOTHING;
invalid syntax: statement ignored: at or near "(": syntax error
SQLSTATE: 42601
DETAIL: source SQL:
INSERT INTO t VALUES (1, 2) ON CONFLICT ((a + b)) DO NOTHING
^
HINT: try \h INSERT
INSERT INTO t VALUES (1, 2) ON CONFLICT ((a + b)) DO UPDATE SET a = 10;
invalid syntax: statement ignored: at or near "(": syntax error
SQLSTATE: 42601
DETAIL: source SQL:
INSERT INTO t VALUES (1, 2) ON CONFLICT ((a + b)) DO UPDATE SET a = 10
^
HINT: try \h INSERT
Optimizer stale statistics deletion when columns are dropped
When a column is dropped from a multi-column index, the optimizer will not collect new statistics for the dropped column. However, the optimizer never deletes the old multi-column statistics. This can cause a buildup of statistics in system.table_statistics, leading the optimizer to use stale statistics, which could result in sub-optimal plans. To work around this issue and avoid these scenarios, explicitly delete those statistics from the system.table_statistics table.
Single-column statistics are not deleted when columns are dropped, which could cause minor performance issues.
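A sketch of that cleanup, assuming a hypothetical table t and a hypothetical stale statistic named old_multi_col_stat; inspect the stored statistics first, then delete the stale rows (this requires admin privileges, and you may also want to filter on the table ID):
-- Inspect the statistics currently stored for the table.
SHOW STATISTICS FOR TABLE t;
-- Delete the stale multi-column statistic by name (hypothetical name).
DELETE FROM system.table_statistics WHERE name = 'old_multi_col_stat';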
Automatic statistics refresher may not refresh after upgrade
The automatic statistics refresher automatically checks whether it needs to refresh statistics for every table in the database upon startup of each node in the cluster. If statistics for a table have not been refreshed in a while, this will trigger collection of statistics for that table. If statistics have been refreshed recently, it will not force a refresh. As a result, the automatic statistics refresher does not necessarily perform a refresh of statistics after an upgrade. This could cause a problem, for example, if the upgrade moves from a version without histograms to a version with histograms. To refresh statistics manually, use CREATE STATISTICS.
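For example, a minimal sketch that refreshes statistics for a hypothetical table t:
CREATE STATISTICS post_upgrade_stats FROM t;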
Differences in syntax and behavior between CockroachDB and PostgreSQL
CockroachDB supports the PostgreSQL wire protocol and the majority of its syntax. However, CockroachDB does not support some PostgreSQL features, or behaves differently from PostgreSQL, because not all features can be easily implemented in a distributed system.
For a list of known differences in syntax and behavior between CockroachDB and PostgreSQL, see Features that differ from PostgreSQL.
Multiple arbiter indexes for INSERT ON CONFLICT DO UPDATE
CockroachDB does not currently support multiple arbiter indexes for INSERT ON CONFLICT DO UPDATE, and will return an error if there are multiple unique or exclusion constraints matching the ON CONFLICT DO UPDATE specification.
IMPORT into a table with partial indexes
CockroachDB does not currently support IMPORTs into tables with partial indexes.
To work around this limitation:
- Drop any partial indexes defined on the table.
- Perform the IMPORT.
- Recreate the partial indexes.
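A sketch of this workaround, with a hypothetical table, partial index, and file location:
-- 1. Drop the partial index.
DROP INDEX t@t_active_idx;
-- 2. Perform the import.
IMPORT INTO t (id, status) CSV DATA ('nodelocal://1/t.csv');
-- 3. Recreate the partial index.
CREATE INDEX t_active_idx ON t (id) WHERE status = 'active';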
If you are performing an IMPORT of a PGDUMP with partial indexes:
- Drop the partial indexes on the PostgreSQL server.
- Recreate the PGDUMP.
- IMPORT the PGDUMP.
- Add partial indexes on the CockroachDB server.
Spatial support limitations
CockroachDB supports efficiently storing and querying spatial data, with the following limitations:
- Not all PostGIS spatial functions are supported.
- The AddGeometryColumn spatial function only allows constant arguments.
- The AddGeometryColumn spatial function only allows the true value for its use_typmod parameter.
- CockroachDB does not support the @ operator. Instead of using @ in spatial expressions, we recommend using the inverse, with ~. For example, instead of a @ b, use b ~ a.
- CockroachDB does not yet support INSERTs into the spatial_ref_sys table. This limitation also blocks the ogr2ogr -f PostgreSQL file conversion command.
- CockroachDB does not yet support Triangle or TIN spatial shapes.
- CockroachDB does not yet support Curve, MultiCurve, or CircularString spatial shapes.
- CockroachDB does not yet support k-nearest neighbors.
- CockroachDB does not support using schema name prefixes to refer to data types with type modifiers (e.g., public.geometry(linestring, 4326)). Instead, use fully-unqualified names to refer to data types with type modifiers (e.g., geometry(linestring, 4326)).
- Note that, in IMPORT PGDUMP output, GEOMETRY and GEOGRAPHY data type names are prefixed by public. If the type has a type modifier, you must remove the public. prefix from the type name in order for the statements to work in CockroachDB.
Subqueries in SET statements
It is not currently possible to use a subquery in a SET or SET CLUSTER SETTING statement. For example:
> SET application_name = (SELECT 'a' || 'b');
ERROR: invalid value for parameter "application_name": "(SELECT 'a' || 'b')"
SQLSTATE: 22023
DETAIL: subqueries are not allowed in SET
Enterprise BACKUP does not capture database/table/column comments
The COMMENT ON statement associates comments to databases, tables, or columns. However, the internal table (system.comments) in which these comments are stored is not captured by a BACKUP of a table or database.
As a workaround, take a cluster backup instead, as the system.comments table is included in cluster backups.
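For example, a minimal sketch of a full cluster backup to a hypothetical storage location:
-- A cluster backup includes system.comments, so COMMENT ON data is preserved.
BACKUP INTO 'nodelocal://1/cluster-backup';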
DB Console may become inaccessible for secure clusters
Accessing the DB Console for a secure cluster now requires login information (i.e., username and password). This login information is stored in a system table that is replicated like other data in the cluster. If a majority of the nodes with the replicas of the system table data go down, users will be locked out of the DB Console.
AS OF SYSTEM TIME in SELECT statements
AS OF SYSTEM TIME can only be used in a top-level SELECT statement. That is, we do not support statements like INSERT INTO t SELECT * FROM t2 AS OF SYSTEM TIME <time> or two subselects in the same statement with differing AS OF SYSTEM TIME arguments.
Large index keys can impair performance
The use of tables with very large primary or secondary index keys (>32KB) can result in excessive memory usage. Specifically, if the primary or secondary index key is larger than 32KB the default indexing scheme for storage engine SSTables breaks down and causes the index to be excessively large. The index is pinned in memory by default for performance.
To work around this issue, we recommend limiting the size of primary and secondary keys to 4KB, which you must account for manually. Note that most columns are 8B (exceptions being STRING and JSON), which still allows for very complex key structures.
Using LIKE...ESCAPE in WHERE and HAVING constraints
CockroachDB tries to optimize most comparison operators in WHERE and HAVING clauses into constraints on SQL indexes by only accessing selected rows. This is done for LIKE clauses when a common prefix for all selected rows can be determined in the search pattern (e.g., ... LIKE 'Joe%'). However, this optimization is not yet available if the ESCAPE keyword is also used.
Ordering tables by JSONB/JSON-typed columns
CockroachDB does not currently key-encode JSON values. As a result, tables cannot be ordered by JSONB/JSON-typed columns.
Current sequence value not checked when updating min/max value
Altering the minimum or maximum value of a sequence does not check the current value of the sequence. This means that it is possible to silently set the maximum to a value less than, or the minimum to a value greater than, the current value.
Using default_int_size session variable in batch of statements
When setting the default_int_size session variable in a batch of statements such as SET default_int_size='int4'; SELECT 1::INT, the default_int_size variable will not take effect until the next statement. This happens because statement parsing takes place asynchronously from statement execution.
As a workaround, set default_int_size via your database driver, or ensure that SET default_int_size is in its own statement.
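For example, sending the two statements separately (a sketch) avoids the problem:
-- Sent as its own statement, so it takes effect before the next one is parsed.
SET default_int_size = 'int4';
SELECT 1::INT;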
COPY syntax not supported by CockroachDB
CockroachDB does not yet support the following COPY syntax:
- COPY ... TO. To copy data from a CockroachDB cluster to a file, use an EXPORT statement; a sketch appears after this list.
- Various unsupported COPY options (FORMAT, FREEZE, etc.)
- COPY ... FROM ... WHERE <expr>
Import with a high amount of disk contention
IMPORT can sometimes fail with a "context canceled" error, or can restart itself many times without ever finishing. If this is happening, it is likely due to a high amount of disk contention. This can be mitigated by setting the kv.bulk_io_write.max_rate cluster setting to a value below your max disk write speed. For example, to set it to 10MB/s, execute:
> SET CLUSTER SETTING kv.bulk_io_write.max_rate = '10MB';
Placeholders in PARTITION BY
When defining a table partition, either during table creation or table alteration, it is not possible to use placeholders in the PARTITION BY clause.
Dropping a single partition
CockroachDB does not currently support dropping a single partition from a table. In order to remove partitions, you can repartition the table.
Adding a column with sequence-based DEFAULT values
It is currently not possible to add a column to a table when the column uses a sequence as the DEFAULT value, for example:
> CREATE TABLE t (x INT);
> INSERT INTO t(x) VALUES (1), (2), (3);
> CREATE SEQUENCE s;
> ALTER TABLE t ADD COLUMN y INT DEFAULT nextval('s');
ERROR: nextval(): unimplemented: cannot evaluate scalar expressions containing sequence operations in this context
SQLSTATE: 0A000
Available capacity metric in the DB Console
If you are testing your deployment locally with multiple CockroachDB nodes running on a single machine (this is not recommended in production), you must explicitly set the store size per node in order to display the correct capacity. Otherwise, the machine's actual disk capacity will be counted as a separate store for each node, thus inflating the computed capacity.
Schema changes within transactions
Within a single transaction:
- You can run schema changes inside the same transaction as a CREATE TABLE statement. For more information, see Run schema changes inside a transaction with CREATE TABLE. However, a CREATE TABLE statement containing FOREIGN KEY clauses cannot be followed by statements that reference the new table.
- Schema change DDL statements inside a multi-statement transaction can fail while other statements succeed.
- DROP COLUMN can result in data loss if one of the other schema changes in the transaction fails or is canceled. To work around this, move the DROP COLUMN statement to its own explicit transaction or run it in a single statement outside the existing transaction.
If a schema change within a transaction fails, manual intervention may be needed to determine which statement has failed. After determining which schema change(s) failed, you can then retry the schema change.
Schema change DDL statements inside a multi-statement transaction can fail while other statements succeed
Schema change DDL statements that run inside a multi-statement transaction with non-DDL statements can fail at COMMIT time, even if other statements in the transaction succeed. This leaves such transactions in a "partially committed, partially aborted" state that may require manual intervention to determine whether the DDL statements succeeded.
If such a failure occurs, CockroachDB will emit a CockroachDB-specific error code, XXA00, and the following error message:
transaction committed but schema change aborted with error: <description of error>
HINT: Some of the non-DDL statements may have committed successfully, but some of the DDL statement(s) failed.
Manual inspection may be required to determine the actual state of the database.
If you must execute schema change DDL statements inside a multi-statement transaction, we strongly recommend checking for this error code and handling it appropriately every time you execute such transactions.
This error will occur in various scenarios, including but not limited to:
- Creating a unique index fails because values aren't unique.
- The evaluation of a computed value fails.
- Adding a constraint (or a column with a constraint) fails because the constraint is violated for the default/computed values in the column.
To see an example of this error, start by creating the following table.
CREATE TABLE T(x INT);
INSERT INTO T(x) VALUES (1), (2), (3);
Then, enter the following multi-statement transaction, which will trigger the error.
BEGIN;
ALTER TABLE t ADD CONSTRAINT unique_x UNIQUE(x);
INSERT INTO T(x) VALUES (3);
COMMIT;
pq: transaction committed but schema change aborted with error: (23505): duplicate key value (x)=(3) violates unique constraint "unique_x"
HINT: Some of the non-DDL statements may have committed successfully, but some of the DDL statement(s) failed.
Manual inspection may be required to determine the actual state of the database.
In this example, the INSERT statement committed, but the ALTER TABLE statement adding a UNIQUE constraint failed. We can verify this by looking at the data in table t and seeing that the additional non-unique value 3 was successfully inserted.
SELECT * FROM t;
x
+---+
1
2
3
3
(4 rows)
Schema changes between executions of prepared statements
When the schema of a table targeted by a prepared statement changes after the prepared statement is created, future executions of the prepared statement could result in an error. For example, adding a column to a table referenced in a prepared statement with a SELECT * clause will result in an error:
CREATE TABLE users (id INT PRIMARY KEY);
PREPARE prep1 AS SELECT * FROM users;
ALTER TABLE users ADD COLUMN name STRING;
INSERT INTO users VALUES (1, 'Max Roach');
EXECUTE prep1;
ERROR: cached plan must not change result type
SQLSTATE: 0A000
It's therefore recommended to explicitly list result columns instead of using SELECT * in prepared statements, when possible.
Size limits on statement input from SQL clients
CockroachDB imposes a hard limit of 16MiB on the data input for a single statement passed to CockroachDB from a client (including the SQL shell). We do not recommend attempting to execute statements from clients with large input.
Using \| to perform a large input in the SQL shell
In the built-in SQL shell, using the \| operator to perform a large number of inputs from a file can cause the server to close the connection. This is because \| sends the entire file as a single query to the server, which can exceed the upper bound on the size of a packet the server can accept from any client (16MB).
As a workaround, execute the file from the command line with cat data.sql | cockroach sql instead of from within the interactive shell.
New values generated by DEFAULT expressions during ALTER TABLE ADD COLUMN
When executing an ALTER TABLE ADD COLUMN statement with a DEFAULT expression, new values generated:
- use the default search path regardless of the search path configured in the current session via SET SEARCH_PATH.
- use the UTC time zone regardless of the time zone configured in the current session via SET TIME ZONE.
- have no default database regardless of the default database configured in the current session via SET DATABASE, so you must specify the database of any tables they reference.
- use the transaction timestamp for the statement_timestamp() function regardless of the time at which the ALTER statement was issued.
Load-based lease rebalancing in uneven latency deployments
When nodes are started with the --locality flag, CockroachDB attempts to place the replica lease holder (the replica that client requests are forwarded to) on the node closest to the source of the request. This means as client requests move geographically, so too does the replica lease holder.
However, you might see increased latency caused by a consistently high rate of lease transfers between datacenters in the following case:
- Your cluster runs in datacenters which are very different distances away from each other.
- Each node was started with a single tier of --locality, e.g., --locality=datacenter=a.
- Most client requests get sent to a single datacenter because that's where all your application traffic is.
To detect if this is happening, open the DB Console, select the Queues dashboard, hover over the Replication Queue graph, and check the Leases Transferred / second data point. If the value is consistently larger than 0, you should consider stopping and restarting each node with additional tiers of locality to improve request latency.
For example, let's say that latency is 10ms from nodes in datacenter A to nodes in datacenter B but is 100ms from nodes in datacenter A to nodes in datacenter C. To ensure A's and B's relative proximity is factored into lease holder rebalancing, you could restart the nodes in datacenters A and B with a common region, --locality=region=foo,datacenter=a and --locality=region=foo,datacenter=b, while restarting nodes in datacenter C with a different region, --locality=region=bar,datacenter=c.
Overload resolution for collated strings
Many string operations are not properly overloaded for collated strings, for example:
> SELECT 'string1' || 'string2';
?column?
------------------
string1string2
(1 row)
> SELECT ('string1' collate en) || ('string2' collate en);
pq: unsupported binary operator: <collatedstring{en}> || <collatedstring{en}>
Max size of a single column family
When creating or updating a row, if the combined size of all values in a single column family exceeds the max range size (512 MiB by default) for the table, the operation may fail, or cluster performance may suffer.
As a workaround, you can either manually split a table's columns into multiple column families, or you can create a table-specific zone configuration with an increased max range size.
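A sketch of the first workaround, splitting a hypothetical table's large column into its own column family at table creation time:
CREATE TABLE blobs (
  id INT PRIMARY KEY,
  metadata JSONB,
  payload BYTES,
  FAMILY small (id, metadata),
  FAMILY large (payload)
);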
Simultaneous client connections and running queries on a single node
When a node has both a high number of client connections and running queries, the node may crash due to memory exhaustion. This is due to CockroachDB not accurately limiting the number of clients and queries based on the amount of available RAM on the node.
To prevent memory exhaustion, monitor each node's memory usage and ensure there is some margin between maximum CockroachDB memory usage and available system RAM. For more details about memory usage in CockroachDB, see this blog post.
New in v22.1: To control the maximum number of non-superuser (i.e., users other than root or members of the admin role) connections a gateway node can have open at one time, use the server.max_connections_per_gateway cluster setting. If a new non-superuser connection would exceed this limit, the error message "sorry, too many clients already" is returned, along with error code 53300.
This may be useful in addition to your memory monitoring.
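For example, a sketch that caps each gateway node at 100 non-superuser connections (the value is illustrative only):
SET CLUSTER SETTING server.max_connections_per_gateway = 100;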
Privileges for DELETE and UPDATE
Every DELETE or UPDATE statement constructs a SELECT statement, even when no WHERE clause is involved. As a result, the user executing DELETE or UPDATE requires both the DELETE and SELECT or UPDATE and SELECT privileges on the table.
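For example, a sketch of the grants needed for a hypothetical user to run DELETE statements against a hypothetical table:
GRANT SELECT, DELETE ON TABLE accounts TO app_user;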
ROLLBACK TO SAVEPOINT in high-priority transactions containing DDL
Transactions with priority HIGH that contain DDL and ROLLBACK TO SAVEPOINT are not supported, as they could result in a deadlock. For example:
> BEGIN PRIORITY HIGH; SAVEPOINT s; CREATE TABLE t(x INT); ROLLBACK TO SAVEPOINT s;
ERROR: unimplemented: cannot use ROLLBACK TO SAVEPOINT in a HIGH PRIORITY transaction containing DDL
SQLSTATE: 0A000
HINT: You have attempted to use a feature that is not yet implemented.
See: https://github.com/cockroachdb/cockroach/issues/46414
CockroachDB does not test for all connection failure scenarios
CockroachDB servers rely on the network to report when a TCP connection fails. In most scenarios when a connection fails, the network immediately reports a connection failure, resulting in a Connection refused error.
However, if there is no host at the target IP address, or if a firewall rule blocks traffic to the target address and port, a TCP handshake can linger while the client network stack waits for a TCP packet in response to network requests. To work around this kind of scenario, we recommend the following:
- When migrating a node to a new machine, keep the server listening at the previous IP address until the cluster has completed the migration.
- Configure any active network firewalls to allow node-to-node traffic.
- Verify that orchestration tools (e.g., Kubernetes) are configured to use the correct network connection information.
Some column-dropping schema changes do not roll back properly
Some schema changes that drop columns cannot be rolled back properly.
In some cases, the rollback will succeed, but the column data might be partially or totally missing, or stale due to the asynchronous nature of the schema change.
In other cases, the rollback will fail in such a way that will never be cleaned up properly, leaving the table descriptor in a state where no other schema changes can be run successfully.
To reduce the chance that a column drop will roll back incorrectly:
Perform column drops in transactions separate from other schema changes. This ensures that other schema change failures will not cause the column drop to be rolled back.
Drop all constraints (including unique indexes) on the column in a separate transaction, before dropping the column.
Drop any default values or computed expressions on a column before attempting to drop the column. This prevents conflicts between constraints and default/computed values during a column drop rollback.
If you think a rollback of a column-dropping schema change has occurred, check the jobs table. Schema changes with an error prefaced by cannot be reverted, manual cleanup may be required might require manual intervention.
Disk-spilling on joins with JSON columns
If the execution of a join query exceeds the limit set for memory-buffering operations (i.e., the value set for the sql.distsql.temp_storage.workmem cluster setting), CockroachDB will spill the intermediate results of computation to disk. If the join operation spills to disk, and at least one of the equality columns is of type JSON, CockroachDB returns the error unable to encode table key: *tree.DJSON. If the memory limit is not reached, then the query will be processed without error.
Remove a UNIQUE index created as part of CREATE TABLE
UNIQUE indexes created as part of a CREATE TABLE statement cannot be removed without using CASCADE. Unique indexes created with CREATE INDEX do not have this limitation.
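For example, a sketch with a hypothetical table; dropping the UNIQUE index defined in CREATE TABLE requires CASCADE:
CREATE TABLE t (a INT PRIMARY KEY, b INT, UNIQUE INDEX b_idx (b));
-- Fails without CASCADE because the index backs a UNIQUE constraint
-- created as part of CREATE TABLE.
DROP INDEX t@b_idx CASCADE;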