To delete a large number of rows (i.e., tens of thousands of rows or more), we recommend iteratively deleting subsets of the rows that you want to delete, until all of the unwanted rows have been deleted. You can write a script to do this, or you can write a loop into your application.
This page provides guidance on batch deleting with the DELETE query filter on an indexed column and on a non-indexed column. Filtering on an indexed column is both simpler to implement and more efficient, but adding an index to a table can slow down insertions to the table and may cause bottlenecks. Queries that filter on a non-indexed column must perform at least one full-table scan, a process that takes time proportional to the size of the entire table.
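If you are unsure whether a candidate filter column is indexed, you can inspect the query plan with EXPLAIN before committing to either pattern. The following is a minimal sketch against the tpcc database used later on this page; the exact layout of EXPLAIN output varies by CockroachDB version:

#!/usr/bin/env python3
import psycopg2
import os

conn = psycopg2.connect(os.environ.get('DB_URI'))
with conn:
    with conn.cursor() as cur:
        # no_w_id leads new_order's primary index, so the plan scans a
        # limited key span rather than the whole table.
        cur.execute("EXPLAIN SELECT * FROM new_order WHERE no_w_id < 5")
        print("\n".join(str(row[0]) for row in cur))
        # h_date is not indexed in the history table, so the plan shows a
        # full table scan.
        cur.execute("EXPLAIN SELECT * FROM history WHERE h_date < '2020-01-01'")
        print("\n".join(str(row[0]) for row in cur))
conn.close()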
If you want to delete all of the rows in a table (and not just a large subset of the rows), use a TRUNCATE statement.
Exercise caution when batch deleting rows from tables with foreign key constraints and explicit ON DELETE foreign key actions. To preserve DELETE performance on tables with foreign key actions, we recommend using smaller batch sizes, as additional rows updated or deleted due to ON DELETE actions can make batch loops significantly slower.
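For example, if a child table references the rows you are deleting with an ON DELETE CASCADE action, every batch also deletes the matching child rows, so the actual write size of each transaction is larger than the batch's LIMIT. A minimal sketch, using hypothetical parents and children tables that are not part of the examples on this page:

#!/usr/bin/env python3
import psycopg2
import os

conn = psycopg2.connect(os.environ.get('DB_URI'))
with conn:
    with conn.cursor() as cur:
        # Hypothetical schema: children rows are deleted automatically when
        # their parent row is deleted.
        cur.execute("CREATE TABLE IF NOT EXISTS parents (id INT PRIMARY KEY)")
        cur.execute("""CREATE TABLE IF NOT EXISTS children (
                           id INT PRIMARY KEY,
                           parent_id INT REFERENCES parents (id) ON DELETE CASCADE)""")
        # Deleting 5,000 parent rows also deletes every matching child row,
        # so a smaller LIMIT keeps the total transaction size manageable.
        cur.execute("DELETE FROM parents WHERE id < 100000 LIMIT 5000")
conn.close()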
Before you begin
Before reading this page, do the following:
- Create a CockroachDB Serverless cluster or start a local cluster.
- For the example on this page, we use the psycopg2 Python driver.
- Insert data that you now want to delete. For the example on this page, we load a cluster with the tpcc database and data from cockroach workload.
Batch delete on an indexed column
For high-performance batch deletes, we recommend filtering the DELETE query on an indexed column.

Having an indexed filtering column can make delete operations faster, but it might lead to bottlenecks in execution, especially if the filtering column is a timestamp. To reduce bottlenecks, we recommend using a hash-sharded index.
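As a sketch of that recommendation, the statements below create a hash-sharded secondary index on a hypothetical timestamp column; the same pattern appears in the batch-delete "expired" data example later on this page:

#!/usr/bin/env python3
import psycopg2
import os

conn = psycopg2.connect(os.environ.get('DB_URI'))
with conn:
    with conn.cursor() as cur:
        # The events table and created_at column here are hypothetical.
        cur.execute("SET experimental_enable_hash_sharded_indexes = on")
        cur.execute("CREATE INDEX ON events (created_at) USING HASH WITH BUCKET_COUNT = 8")
conn.close()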
Each iteration of a batch-delete loop should execute a transaction containing a single DELETE query. When writing this DELETE query:

- Use a WHERE clause to filter on a column that identifies the unwanted rows. If the filtering column is not the primary key, the column should have a secondary index. Note that if the filtering column is not already indexed, it is not beneficial to add an index just to speed up batch deletes. Instead, consider batch deleting on non-indexed columns.
- To ensure that rows are efficiently scanned in the DELETE query, add an ORDER BY clause on the filtering column.
- Use a LIMIT clause to limit the number of rows to the desired batch size. To determine the optimal batch size, try out different batch sizes (1,000 rows, 10,000 rows, 100,000 rows, etc.) and monitor the change in performance.
- Add a RETURNING clause to the end of the query that returns the filtering column values of the deleted rows. Then, using the values of the deleted rows, update the filter to match only the subset of remaining rows to delete. This narrows each query's scan to the fewest rows possible, and preserves the performance of the deletes over time. This pattern assumes that no new rows matching the DELETE filter are generated during the time that it takes to perform the delete.
For example, suppose that you want to delete all rows in the tpcc new_order table where no_w_id is less than 5, in batches of 5,000 rows. To do this, you can write a script that loops over batches of 5,000 rows, following the DELETE query guidance provided above. Note that in this case, no_w_id is the first column in the primary index, and, as a result, you do not need to create a secondary index on the column.
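If you need to confirm which columns a table's primary index covers before choosing a filter column, you can inspect the schema. A minimal sketch, assuming the same DB_URI connection string as the other examples on this page:

#!/usr/bin/env python3
import psycopg2
import os

conn = psycopg2.connect(os.environ.get('DB_URI'))
with conn:
    with conn.cursor() as cur:
        cur.execute("SHOW CREATE TABLE new_order")
        # The second column of the result is the CREATE TABLE statement,
        # including the primary key definition.
        print(cur.fetchone()[1])
conn.close()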
In Python, the script would look similar to the following:
#!/usr/bin/env python3
import psycopg2
import psycopg2.sql
import os

conn = psycopg2.connect(os.environ.get('DB_URI'))
filter = 4
lastrow = None

while True:
    with conn:
        with conn.cursor() as cur:
            # Narrow the filter to exclude rows deleted in earlier iterations.
            if lastrow:
                filter = lastrow[0]
            query = psycopg2.sql.SQL("DELETE FROM new_order WHERE no_w_id <= %s ORDER BY no_w_id DESC LIMIT 5000 RETURNING no_w_id")
            cur.execute(query, (filter,))
            print(cur.statusmessage)
            # Stop once no rows match the filter.
            if cur.rowcount == 0:
                break
            lastrow = cur.fetchone()

conn.close()
This script iteratively deletes rows in batches of 5,000, until all of the rows where no_w_id <= 4 are deleted. Note that at each iteration, the filter is updated to match a narrower subset of rows.
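To choose the batch size empirically, you can time each iteration and rerun the loop with different LIMIT values. A minimal sketch of such a measurement, using the same new_order example; batch_size is the value being tuned:

#!/usr/bin/env python3
import psycopg2
import os
import time

conn = psycopg2.connect(os.environ.get('DB_URI'))
batch_size = 5000  # try 1,000, 10,000, 100,000, etc., and compare timings
filter = 4
lastrow = None

while True:
    start = time.monotonic()
    with conn:
        with conn.cursor() as cur:
            if lastrow:
                filter = lastrow[0]
            cur.execute("DELETE FROM new_order WHERE no_w_id <= %s ORDER BY no_w_id DESC LIMIT %s RETURNING no_w_id",
                        (filter, batch_size))
            deleted = cur.rowcount
            if deleted == 0:
                break
            lastrow = cur.fetchone()
    # Per-batch latency; compare across runs with different batch sizes.
    print("deleted %d rows in %.3fs" % (deleted, time.monotonic() - start))

conn.close()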
Batch delete on a non-indexed column
If you cannot index the column that identifies the unwanted rows, we recommend defining the batch loop to execute separate read and write operations at each iteration:
1. Execute a SELECT query that returns the primary key values for the rows that you want to delete. When writing the SELECT query:

   - Use a WHERE clause that filters on the column identifying the rows.
   - Add an AS OF SYSTEM TIME clause to the end of the selection subquery, or run the selection query in a separate, read-only transaction with SET TRANSACTION AS OF SYSTEM TIME. This helps to reduce transaction contention.
   - Use a LIMIT clause to limit the number of rows queried to a subset of the rows that you want to delete. To determine the optimal SELECT batch size, try out different sizes (10,000 rows, 100,000 rows, 1,000,000 rows, etc.), and monitor the change in performance. Note that this SELECT batch size can be much larger than the batch size of rows that are deleted in the subsequent DELETE query.
   - To ensure that rows are efficiently scanned in the subsequent DELETE query, include an ORDER BY clause on the primary key.

2. Write a nested DELETE loop over the primary key values returned by the SELECT query, in batches smaller than the initial SELECT batch size. To determine the optimal DELETE batch size, try out different sizes (1,000 rows, 10,000 rows, 100,000 rows, etc.), and monitor the change in performance. Where possible, we recommend executing each DELETE in a separate transaction.
For example, suppose that you want to delete all rows in the tpcc history table that are older than a month. You can create a script that loops over the data and deletes unwanted rows in batches, following the query guidance provided above.
In Python, the script would look similar to the following:
#!/usr/bin/env python3
import psycopg2
import os
import time

conn = psycopg2.connect(os.environ.get('DB_URI'))

while True:
    with conn:
        with conn.cursor() as cur:
            # Read the primary keys of up to 20,000 expired rows from 5
            # seconds in the past, in a read-only transaction.
            cur.execute("SET TRANSACTION AS OF SYSTEM TIME '-5s'")
            cur.execute("SELECT h_w_id, rowid FROM history WHERE h_date < current_date() - INTERVAL '1 MONTH' ORDER BY h_w_id, rowid LIMIT 20000")
            pkvals = list(cur)
    # Stop once the selection query returns no more expired rows.
    if not pkvals:
        break
    while pkvals:
        batch = pkvals[:5000]
        pkvals = pkvals[5000:]
        with conn:
            with conn.cursor() as cur:
                # Delete each batch of 5,000 rows in its own transaction.
                cur.execute("DELETE FROM history WHERE (h_w_id, rowid) = ANY %s", (batch,))
                print(cur.statusmessage)
        del batch
    del pkvals
    # Wait before the next historical read, so that it sees the deletes.
    time.sleep(5)

conn.close()
At each iteration, the selection query returns the primary key values of up to 20,000 rows of matching historical data from 5 seconds in the past, in a read-only transaction. Then, a nested loop iterates over the returned primary key values in smaller batches of 5,000 rows. At each iteration of the nested DELETE loop, a batch of rows is deleted. After the nested DELETE loop deletes all of the rows from the initial selection query, a time delay of 5 seconds (matching the AS OF SYSTEM TIME interval) ensures that the next selection query reads historical data from after the last iteration's final DELETE.
CockroachDB records the timestamp of each row created in a table in the crdb_internal_mvcc_timestamp metadata column. In the absence of an explicit timestamp column in your table, you can use crdb_internal_mvcc_timestamp to filter expired data.

crdb_internal_mvcc_timestamp cannot be indexed. If you plan to use crdb_internal_mvcc_timestamp as a filter for large deletes, you must follow the non-indexed column pattern.
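For example, the selection step of the non-indexed pattern could compute a cutoff in nanoseconds since the Unix epoch, which is how the wall-clock portion of crdb_internal_mvcc_timestamp is encoded. A minimal sketch against a hypothetical events table with an id primary key:

#!/usr/bin/env python3
import psycopg2
import os
import time

conn = psycopg2.connect(os.environ.get('DB_URI'))
# Cutoff 30 days in the past, in nanoseconds since the Unix epoch, matching
# the wall-clock portion of the MVCC timestamp.
cutoff = (time.time() - 30 * 24 * 3600) * 1e9

with conn:
    with conn.cursor() as cur:
        cur.execute("SET TRANSACTION AS OF SYSTEM TIME '-5s'")
        # The events table and id column here are hypothetical.
        cur.execute("SELECT id FROM events WHERE crdb_internal_mvcc_timestamp < %s ORDER BY id LIMIT 20000",
                    (cutoff,))
        pkvals = [row[0] for row in cur]
conn.close()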
Exercise caution when using crdb_internal_mvcc_timestamp in production, as the column is subject to change without prior notice in new releases of CockroachDB. Instead, we recommend creating a column with an ON UPDATE expression to avoid any conflicts due to internal changes to crdb_internal_mvcc_timestamp.
Batch-delete "expired" data
CockroachDB does not support Time to Live (TTL) on table rows. To delete "expired" rows, we recommend automating a batch delete process using a job scheduler like cron.
For example, suppose that every morning you want to delete all rows in the rides table in the movr database that are older than a month. To do this, you could write a Python script that batch-deletes rows based on the values of an indexed TIMESTAMPTZ column, and then run the script with a daily cron job.
To record the last day and time a row was updated, create a TIMESTAMPTZ column with an ON UPDATE expression:

USE movr;

ALTER TABLE rides ADD COLUMN last_updated TIMESTAMPTZ DEFAULT now() ON UPDATE now();
To improve DELETE performance, index the TIMESTAMPTZ column. We recommend using a hash-sharded index to reduce bottlenecks, as TIMESTAMPTZ values are stored sequentially in ranges:

SET experimental_enable_hash_sharded_indexes=on;

CREATE INDEX ON rides(last_updated) USING HASH WITH BUCKET_COUNT=8;
Write a script with a batch-delete loop, following the batch delete on an indexed column pattern:
#!/usr/bin/env python3
import psycopg2
import psycopg2.sql
import os
import datetime

conn = psycopg2.connect(os.environ.get('DB_URI'))
filter = datetime.datetime.utcnow() - datetime.timedelta(days=30)
lastrow = None

while True:
    with conn:
        with conn.cursor() as cur:
            if lastrow:
                filter = lastrow[0]
            cur.execute("DELETE FROM rides WHERE last_updated < %s ORDER BY last_updated DESC LIMIT 5000 RETURNING last_updated", [filter])
            print(cur.statusmessage)
            if cur.rowcount == 0:
                break
            lastrow = cur.fetchone()

conn.close()
Make the file executable:
$ chmod +x cleanup.py
Create a new cron job:

$ crontab -e

30 10 * * * DB_URI='cockroachdb://user@host:26257/movr' cleanup.py >> ~/cron.log 2>&1
Saving the cron file will install a new job that runs the cleanup.py file every morning at 10:30 A.M., writing the results to the cron.log file.
Delete all of the rows in a table
To delete all of the rows in a table, use a TRUNCATE statement.
For example, to delete all rows in the tpcc new_order table, execute the following SQL statement:
TRUNCATE new_order;
You can execute the statement from a compatible SQL client (e.g., the CockroachDB SQL client), or in a script or application.
For example, in Python, using the psycopg2 client driver:
#!/usr/bin/env python3
import psycopg2
import os
conn = psycopg2.connect(os.environ.get('DB_URI'))
with conn:
    with conn.cursor() as cur:
        cur.execute("TRUNCATE new_order")
For detailed reference documentation on the TRUNCATE statement, including additional examples, see the TRUNCATE syntax page.