CockroachDB's `BACKUP` statement allows you to create full or incremental backups of your cluster's schema and data that are consistent as of a given timestamp.
You can back up a full cluster, which includes:
- Relevant system tables
- All databases
- All tables (which automatically includes their indexes)
- All views
- All scheduled jobs
You can also back up:
- An individual database, which includes all of its tables and views
- An individual table, which includes its indexes and views
`BACKUP` only backs up entire tables; it does not support backing up subsets of a table.
Because CockroachDB is designed with high fault tolerance, these backups are designed primarily for disaster recovery (i.e., if your cluster loses a majority of its nodes) through `RESTORE`. Isolated issues (such as small-scale node outages) do not require any intervention.
The `BACKUP ... TO` and `RESTORE ... FROM` syntax is deprecated as of v22.1 and will be removed in a future release.
We recommend using the `BACKUP ... INTO {collectionURI}` syntax, which creates or adds to a backup collection in your storage location. For restoring backups, we recommend using `RESTORE FROM {backup} IN {collectionURI}`, with `{backup}` being `LATEST` or a specific subdirectory.
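For example, following the recommended syntax above, a full cluster backup and a restore of the most recent backup in that collection would look roughly like this (a sketch; the S3 URI and credentials are placeholders):

> BACKUP INTO \
's3://{BUCKET NAME}/{PATH}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}';

> RESTORE FROM LATEST IN \
's3://{BUCKET NAME}/{PATH}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}';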
For guidance on the syntax for backups and restores, see the `BACKUP` and `RESTORE` examples.
Considerations
- Core users can only take full backups. To use the other backup features, you need an Enterprise license. You can also use CockroachDB Dedicated, which runs full backups daily and incremental backups hourly.
- `BACKUP` is a blocking statement. To run a backup job asynchronously, use the `DETACHED` option. See the options below.
- Backups will export Enterprise license keys during a full cluster backup. When you restore a full cluster with an Enterprise license, it will restore the Enterprise license of the cluster you are restoring from.
- Interleaving data is disabled by default in v21.1 and will be permanently removed from CockroachDB in a future release. CockroachDB versions v21.2 and later will not be able to read or restore backups that include interleaved data. To back up interleaved data in v21.1, a `BACKUP` statement must include the `INCLUDE_DEPRECATED_INTERLEAVES` option.
- Zone configurations present on the destination cluster prior to a restore will be overwritten during a cluster restore with the zone configurations from the backed-up cluster. If there were no customized zone configurations on the cluster when the backup was taken, then after the restore the destination cluster will use the zone configuration from the `RANGE DEFAULT` configuration.
- You cannot restore a backup of a multi-region database into a single-region database.
To view the contents of an Enterprise backup created with the `BACKUP` statement, use `SHOW BACKUP`.
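For example, assuming a backup collection already exists at a placeholder S3 location, you could list its subdirectories and then inspect one of them (a sketch; substitute your own URI, credentials, and subdirectory name):

> SHOW BACKUPS IN \
's3://{BUCKET NAME}/{PATH}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}';

> SHOW BACKUP '2021/03/23-213101.37' IN \
's3://{BUCKET NAME}/{PATH}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}';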
Required privileges
- Full cluster backups can only be run by members of the `admin` role. By default, the `root` user belongs to the `admin` role.
- For all other backups, the user must have read access on all objects being backed up. Database and table backups require `SELECT` privileges. Backups of user-defined schemas, or backups containing user-defined types, require `USAGE` privileges.
`BACKUP` requires full read and write (including delete and overwrite) permissions to its target destination.
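For example, a user could be granted the privilege needed to back up a single table (a sketch; `maxroach` is a hypothetical user):

> GRANT SELECT ON TABLE bank.customers TO maxroach;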
Destination privileges
The destination file URL does not require the `ADMIN` role in the following scenarios:
- S3 and GS using `SPECIFIED` (and not `IMPLICIT`) credentials. Azure is always `SPECIFIED` by default.
- Userfile
The destination file URL does require the `ADMIN` role in the following scenarios:
- S3 or GS using `IMPLICIT` credentials
- Use of a custom endpoint on S3
- Nodelocal
We recommend using cloud storage for bulk operations.
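To illustrate the two authentication styles mentioned above, the same S3 destination could be given either explicit (specified) credentials or implicit, environment-based credentials (a sketch with placeholder values):

> BACKUP INTO \
's3://{BUCKET NAME}/{PATH}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}';

> BACKUP INTO \
's3://{BUCKET NAME}/{PATH}?AUTH=implicit';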
Synopsis
Parameters
Parameter | Description |
---|---|
`targets` | Back up the listed targets. |
`subdirectory` | The name of the specific backup (e.g., `2021/03/23-213101.37`) in the collection to which you want to add an incremental backup. To view available backup subdirectories, use `SHOW BACKUPS IN destination`. If the backup subdirectory is not provided, a full backup will be created in the collection using a date-based naming scheme (i.e., `<year>/<month>/<day>-<timestamp>`). Warning: If you use an arbitrary `STRING` as the subdirectory, a new full backup will be created, but it will never be shown in `SHOW BACKUPS IN`. We do not recommend using arbitrary strings as subdirectory names. |
`LATEST` | Append an incremental backup to the latest completed full backup's subdirectory. |
`destination` | The URL where you want to store the backup. For information about this URL structure, see Backup File URLs. |
`timestamp` | Back up data as it existed as of `timestamp`. The timestamp must be more recent than your cluster's last garbage collection (which defaults to occur every 25 hours, but is configurable per table). |
`backup_options` | Control the backup behavior with a comma-separated list of these options. |
Targets
Target | Description |
---|---|
N/A | Back up the cluster. For an example of a full cluster backup, see Backup a cluster. |
`DATABASE {database_name} [, ...]` | The name of the database(s) you want to back up (i.e., create backups of all tables and views in the database). For an example of backing up a database, see Backup a database. |
`TABLE {table_name} [, ...]` | The name of the table(s) or view(s) you want to back up. For an example of backing up a table or view, see Backup a table or view. |
Options
Option | Value | Description |
---|---|---|
`revision_history` | N/A | Create a backup with full revision history, which records every change made to the cluster within the garbage collection period leading up to and including the given timestamp. |
`encryption_passphrase` | `STRING` | The passphrase used to encrypt the files (`BACKUP` manifest and data files) that the `BACKUP` statement generates. This same passphrase is needed to decrypt the file when it is used to restore and to list the contents of the backup when using `SHOW BACKUP`. There is no practical limit on the length of the passphrase. |
`DETACHED` | N/A | When a backup runs in `DETACHED` mode, it will execute asynchronously. The job ID will be returned after the backup job creation completes. Note that with `DETACHED` specified, further job information and the job completion status will not be returned. For more on the differences between the returned job data, see the example below. To check on the job status, use the `SHOW JOBS` statement. To run a backup within a transaction, use the `DETACHED` option. |
`kms` | `STRING` | The key management service (KMS) URI (or a comma-separated list of URIs) used to encrypt the files (`BACKUP` manifest and data files) that the `BACKUP` statement generates. This same KMS URI is needed to decrypt the file when it is used to restore and to list the contents of the backup when using `SHOW BACKUP`. Currently, only AWS KMS is supported. |
`INCLUDE_DEPRECATED_INTERLEAVES` | N/A | New in v21.1: Include interleaved tables and indexes in the backup. If this option is not specified, and the cluster, database, or table being backed up includes interleaved data, CockroachDB will return an error. |
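Several of these options can be combined in a single statement. The following sketch (with placeholder destination and passphrase values) takes a backup with full revision history, encrypts it with a passphrase, and runs the job asynchronously:

> BACKUP INTO \
's3://{BUCKET NAME}/{PATH}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}' \
AS OF SYSTEM TIME '-10s' \
WITH revision_history, encryption_passphrase = '{PASSPHRASE}', DETACHED;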
Backup file URLs
CockroachDB uses the URL provided to construct a secure API call to the service you specify. The URL structure depends on the type of file storage you are using. For more information, see the following:
Use Cloud Storage for Bulk Operations
Note: HTTP storage is not supported for `BACKUP` and `RESTORE`.
Functional details
Object dependencies
Dependent objects must be backed up at the same time as the objects they depend on.
Object | Depends On |
---|---|
Table with foreign key constraints | The table it `REFERENCES`; however, this dependency can be removed during the restore. |
Table with a sequence | The sequence it uses; however, this dependency can be removed during the restore. |
Views | The tables used in the view's `SELECT` statement. |
Interleaved tables | The parent table in the interleaved hierarchy. |
Users and privileges
The `system.users` table stores your users and their passwords. To restore your users and privilege grants, do a cluster backup and restore the cluster to a fresh cluster with no user data. You can also back up the `system.users` table, and then use this procedure.
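For example, the `system.users` table can be backed up on its own (a sketch with a placeholder destination):

> BACKUP system.users \
INTO 's3://{BUCKET NAME}/{PATH}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}' \
AS OF SYSTEM TIME '-10s';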
Performance
The `BACKUP` process minimizes its impact on the cluster's performance by distributing work to all nodes. Each node backs up only a specific subset of the data it stores (those for which it serves writes; more details about this architectural concept forthcoming), with no two nodes backing up the same data.
`BACKUP`, like any read, cannot export a range if the range contains an unresolved intent. While you typically will want bulk, background jobs like `BACKUP` to have as little impact on your foreground traffic as possible, it's more important for backups to actually complete (which maintains your recovery point objective (RPO)). Unlike a normal read transaction that will block until any uncommitted writes it encounters are resolved, `BACKUP` will block only for a configurable duration before invoking priority to ensure it can complete on time.
We recommend always starting backups with a specific timestamp at least 10 seconds in the past. For example:
> BACKUP ... AS OF SYSTEM TIME '-10s';
This improves performance by decreasing the likelihood that the `BACKUP` will be retried because it contends with other statements/transactions. However, because `AS OF SYSTEM TIME` returns historical data, your reads might be stale. Taking backups with `AS OF SYSTEM TIME '-10s'` is a best practice that reduces the number of still-running transactions you may encounter, since the backup will take priority and will force still-running transactions to restart after the backup is finished.
`BACKUP` will initially ask individual ranges to back up but to skip if they encounter an intent. Any range that is skipped is placed at the end of the queue. When `BACKUP` has completed its initial pass and is revisiting ranges, it will ask any range that did not resolve within the given time limit (default 1 minute) to attempt to resolve any intents that it encounters and to not skip. Additionally, the backup's transaction priority is then set to `high`, which causes other transactions to abort until the intents are resolved and the backup is finished.
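In recent CockroachDB versions, this time limit is exposed as the `bulkio.backup.read_with_priority_after` cluster setting (an assumption; verify the setting name and default against your version's cluster settings list). For example, to check its current value:

> SHOW CLUSTER SETTING bulkio.backup.read_with_priority_after;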
Viewing and controlling backup jobs
After CockroachDB successfully initiates a backup, it registers the backup as a job, and you can do the following:
Action | SQL Statement |
---|---|
View the backup status | `SHOW JOBS` |
Pause the backup | `PAUSE JOB` |
Resume the backup | `RESUME JOB` |
Cancel the backup | `CANCEL JOB` |
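For example, a running backup could be paused and later resumed by its job ID (a sketch; the job ID shown is hypothetical):

> PAUSE JOB 592786066399264769;

> RESUME JOB 592786066399264769;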
You can also visit the Jobs page of the DB Console to view job details. The `BACKUP` statement will return when the backup is finished or if it encounters an error.
The presence of the `BACKUP MANIFEST` file in the backup subdirectory is an indicator that the backup job completed successfully.
Examples
Per our guidance in the Performance section, we recommend starting backups from a time at least 10 seconds in the past using `AS OF SYSTEM TIME`.
The following examples make use of:
- Amazon S3 connection strings. For guidance on connecting to other storage options or using other authentication parameters instead, read Use Cloud Storage for Bulk Operations.
- The default `AUTH=specified` parameter. For guidance on using `AUTH=implicit` authentication with Amazon S3 buckets instead, read Use Cloud Storage for Bulk Operations — Authentication.
New in v21.1: The syntax `BACKUP ... INTO` adds a backup to a collection within the backup destination. The path to the backup is created using a date-based naming scheme. Versions of CockroachDB prior to v21.1 used the syntax `BACKUP ... TO` to back up directly to a specific operator-chosen destination, rather than picking a date-based path. The `BACKUP ... TO` syntax will be deprecated in future releases. For more information on this soon-to-be deprecated syntax, see the docs for v20.2 or earlier.
Backup a cluster
To take a full backup of a cluster:
> BACKUP INTO \
's3://{BUCKET NAME}/{PATH}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}' \
AS OF SYSTEM TIME '-10s';
Backup a database
To take a full backup of a single database:
> BACKUP DATABASE bank \
INTO 's3://{BUCKET NAME}/{PATH}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}' \
AS OF SYSTEM TIME '-10s';
To take a full backup of multiple databases:
> BACKUP DATABASE bank, employees \
INTO 's3://{BUCKET NAME}/{PATH}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}' \
AS OF SYSTEM TIME '-10s';
Backup a table or view
To take a full backup of a single table or view:
> BACKUP bank.customers \
INTO 's3://{BUCKET NAME}/{PATH}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}' \
AS OF SYSTEM TIME '-10s';
To take a full backup of multiple tables:
> BACKUP bank.customers, bank.accounts \
INTO 's3://{BUCKET NAME}/{PATH}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}' \
AS OF SYSTEM TIME '-10s';
CockroachDB does not currently support `BACKUP`s of individual tables that are part of multi-region databases. For example, you cannot back up a `GLOBAL` or `REGIONAL` table individually.
To work around this limitation, you must back up the database or the entire cluster.
Specify a subdirectory for backups
To store the backup in a specific subdirectory in the storage location:
> BACKUP DATABASE bank \
INTO 'subdirectory' IN 's3://{BUCKET NAME}/{PATH}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}' \
AS OF SYSTEM TIME '-10s';
Create incremental backups
If you back up to a destination already containing a full backup, an incremental backup will be appended to the full backup's path with a date-based name (e.g., `20210324`):
> BACKUP INTO LATEST IN \
's3://{BUCKET NAME}/{PATH}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}' \
AS OF SYSTEM TIME '-10s';
Run a backup asynchronously
Use the `DETACHED` option to execute the backup job asynchronously:
> BACKUP INTO \
's3://{BUCKET NAME}/{PATH}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}' \
AS OF SYSTEM TIME '-10s'
WITH DETACHED;
The job ID is returned after the backup job creation completes:
job_id
----------------------
592786066399264769
(1 row)
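To check on the detached job later, the returned job ID can be used to query the jobs list (a sketch using the ID shown above):

> SELECT job_id, status, fraction_completed FROM [SHOW JOBS] WHERE job_id = 592786066399264769;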
Without the `DETACHED` option, `BACKUP` will block the SQL connection until the job completes. Once finished, the job status and more detailed job data is returned:
job_id | status | fraction_completed | rows | index_entries | bytes
---------------------+-----------+--------------------+------+---------------+--------
652471804772712449 | succeeded | 1 | 50 | 0 | 4911
(1 row)
Advanced examples
For examples of advanced `BACKUP` and `RESTORE` use cases, see:
- Incremental backups with a specified destination
- Backup with revision history and point-in-time restore
- Locality-aware backup and restore
- Encrypted backup and restore
- Restore into a different database
- Remove the foreign key before restore
- Restoring users from a `system.users` backup
Known limitations
Using interleaved tables in backups
Interleaved tables are now disabled by default in v21.1. Your backup will fail if your cluster includes interleaved data. To include interleaved tables, use the `INCLUDE_DEPRECATED_INTERLEAVES` option. Note that interleaved tables will be permanently removed from CockroachDB in a future release, so you will be unable to `RESTORE` backups containing interleaved tables to any future versions.
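A backup of a cluster that still contains interleaved data would therefore need to request the option explicitly (a sketch with a placeholder destination):

> BACKUP INTO \
's3://{BUCKET NAME}/{PATH}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}' \
AS OF SYSTEM TIME '-10s' \
WITH INCLUDE_DEPRECATED_INTERLEAVES;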
`BACKUP` of multi-region tables
CockroachDB does not currently support `BACKUP`s of individual tables that are part of multi-region databases. For example, you cannot back up a `GLOBAL` or `REGIONAL` table individually.
To work around this limitation, you must back up the database or the entire cluster.