This page shows how to insert data into CockroachDB from various programming languages, using the INSERT SQL statement.
Before you begin
Before reading this page, do the following:
- Create a CockroachDB Serverless cluster or start a local cluster.
- Install a Driver or ORM Framework.
- Connect to the database.
Your application should use a retry loop to handle transaction errors that can occur under contention.
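For example, a minimal sketch in Go, assuming the cockroach-go library's crdb.ExecuteTx helper and a hypothetical transferFunds function; the helper retries the enclosed function whenever CockroachDB returns a retryable serialization error (SQLSTATE 40001):

import (
    "context"
    "database/sql"

    "github.com/cockroachdb/cockroach-go/v2/crdb"
)

// transferFunds moves funds between two accounts. crdb.ExecuteTx opens a
// transaction, runs the function, and retries it automatically if the
// database reports a retryable error; otherwise it commits.
func transferFunds(ctx context.Context, db *sql.DB, from, to, amount int) error {
    return crdb.ExecuteTx(ctx, db, nil, func(tx *sql.Tx) error {
        if _, err := tx.ExecContext(ctx,
            "UPDATE accounts SET balance = balance - $1 WHERE id = $2", amount, from); err != nil {
            return err
        }
        _, err := tx.ExecContext(ctx,
            "UPDATE accounts SET balance = balance + $1 WHERE id = $2", amount, to)
        return err
    })
}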
Insert rows
When inserting multiple rows, a single multi-row insert statement is faster than multiple single-row statements.
CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT);
INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250);
For more information about how to use the built-in SQL client, see the cockroach sql reference docs.

In Go, using the standard database/sql package:
// 'db' is an open database connection
// Insert two rows into the "accounts" table.
if _, err := db.Exec(
"INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)"); err != nil {
log.Fatal(err)
}
In Java, using JDBC with a batched PreparedStatement:
// ds is an org.postgresql.ds.PGSimpleDataSource
try (Connection connection = ds.getConnection()) {
    connection.setAutoCommit(false);
    PreparedStatement pstmt = connection.prepareStatement(
        "INSERT INTO accounts (id, balance) VALUES (?, ?)");

    // Queue both rows in a single batch, then execute and commit.
    pstmt.setInt(1, 1);
    pstmt.setInt(2, 1000);
    pstmt.addBatch();

    pstmt.setInt(1, 2);
    pstmt.setInt(2, 250);
    pstmt.addBatch();

    pstmt.executeBatch();
    connection.commit();
} catch (SQLException e) {
    System.out.printf("sql state = [%s]\ncause = [%s]\nmessage = [%s]\n",
        e.getSQLState(), e.getCause(), e.getMessage());
}
In Python, using psycopg2:
# conn is a psycopg2 connection
with conn.cursor() as cur:
    cur.execute('INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)')
conn.commit()
Bulk insert
If you need to get a lot of data into a CockroachDB cluster quickly, use the IMPORT statement instead of sending SQL INSERT statements from application code. It is much faster because it bypasses the SQL layer altogether and writes directly to the data store using low-level commands. For instructions, see the Migration Overview.
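For example, a hedged sketch of a bulk load with IMPORT INTO (the cloud-storage URL is a placeholder, and authentication parameters are omitted):

-- Bulk-load CSV data from cloud storage into the existing accounts table.
IMPORT INTO accounts (id, balance)
    CSV DATA ('s3://my-bucket/accounts.csv');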
Limit the size of rows
To help you avoid failures caused by applications that write oversized rows, you can configure what happens when a row, or an individual column family within a row, larger than a specified size is written to the database. Use the cluster settings sql.guardrails.max_row_size_log to discover large rows and sql.guardrails.max_row_size_err to reject them.
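For example, a hedged sketch of setting both guardrails (the 1 MiB and 16 MiB thresholds here are arbitrary illustrations, not defaults):

-- Log writes of rows larger than 1 MiB.
SET CLUSTER SETTING sql.guardrails.max_row_size_log = '1MiB';

-- Reject writes of rows larger than 16 MiB with an error.
SET CLUSTER SETTING sql.guardrails.max_row_size_err = '16MiB';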
When you write a row that exceeds sql.guardrails.max_row_size_log:

- INSERT, UPSERT, UPDATE, CREATE TABLE AS, CREATE INDEX, ALTER TABLE, ALTER INDEX, IMPORT, or RESTORE statements will log a LargeRow event to the SQL_PERF channel.
- SELECT, DELETE, TRUNCATE, and DROP are not affected.
When you write a row that exceeds sql.guardrails.max_row_size_err:

- INSERT, UPSERT, and UPDATE statements will fail with a code 54000 (program_limit_exceeded) error.
- CREATE TABLE AS, CREATE INDEX, ALTER TABLE, ALTER INDEX, IMPORT, and RESTORE statements will log a LargeRowInternal event to the SQL_INTERNAL_PERF channel.
- SELECT, DELETE, TRUNCATE, and DROP are not affected.
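As a hypothetical illustration (the messages table is an assumption, and assume sql.guardrails.max_row_size_err has been lowered to 1 MiB), an oversized INSERT would be rejected:

-- Assumes: SET CLUSTER SETTING sql.guardrails.max_row_size_err = '1MiB';
CREATE TABLE IF NOT EXISTS messages (id INT PRIMARY KEY, body STRING);

-- A ~2 MiB value exceeds the limit, so this INSERT fails with a
-- code 54000 (program_limit_exceeded) error.
INSERT INTO messages (id, body) VALUES (1, repeat('x', 2097152));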
You cannot update existing rows that violate the limit unless the update shrinks the size of the row below the limit. You can select, delete, alter, back up, and restore such rows. We recommend using the accompanying setting sql.guardrails.max_row_size_log in conjunction with SELECT pg_column_size() queries to detect and fix any existing large rows before lowering sql.guardrails.max_row_size_err.
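For example, a minimal sketch against the accounts table from above (summing pg_column_size over each column approximates the stored row size):

-- List the ten largest rows by approximate size, in bytes.
SELECT id, pg_column_size(id) + pg_column_size(balance) AS approx_row_bytes
FROM accounts
ORDER BY approx_row_bytes DESC
LIMIT 10;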
See also
Reference information related to this task:
- Migration Overview
- IMPORT
- Import performance
- INSERT
- UPSERT
- Transaction Contention
- Multi-row DML best practices
- Insert Multiple Rows