This page shows you how to reproduce CockroachDB TPC-C performance benchmarking results. Across all scales, CockroachDB can process tpmC (new order transactions per minute) at near maximum efficiency. Start by choosing the scale you're interested in:
Workload | Cluster size | Warehouses | Data size
---|---|---|---
Local | 3 nodes on your laptop | 10 | 2 GB
Local (multi-region) | 9 in-memory nodes on your laptop using `cockroach demo` | 10 | 2 GB
Small | 3 nodes on `c5d.4xlarge` machines | 2500 | 200 GB
Medium | 15 nodes on `c5d.4xlarge` machines | 13,000 | 1.04 TB
Large | 81 nodes on `c5d.9xlarge` machines | 140,000 | 11.2 TB
## Before you begin
TPC-C provides the most realistic and objective measure for OLTP performance at various scale factors. Before you get started, consider reviewing what TPC-C is and how it is measured.
Make sure you have already installed CockroachDB.
## Step 1. Start CockroachDB
Note: The `--insecure` flag used in this tutorial is intended for non-production testing only. To run CockroachDB in production, use a secure cluster instead.
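If you do want to work through this tutorial against a secure cluster, a minimal sketch of the certificate setup looks like the following (the `certs` and `my-safe-directory` paths are arbitrary choices for this example; see the secure cluster documentation for the full procedure):

$ mkdir certs my-safe-directory
$ cockroach cert create-ca \
--certs-dir=certs \
--ca-key=my-safe-directory/ca.key
$ cockroach cert create-node localhost \
--certs-dir=certs \
--ca-key=my-safe-directory/ca.key
$ cockroach cert create-client root \
--certs-dir=certs \
--ca-key=my-safe-directory/ca.key

You would then replace `--insecure` with `--certs-dir=certs` in the `cockroach start` commands below and adjust the connection URLs accordingly.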
Use the `cockroach start` command to start 3 nodes:

$ cockroach start \
--insecure \
--store=tpcc-local1 \
--listen-addr=localhost:26257 \
--http-addr=localhost:8080 \
--join=localhost:26257,localhost:26258,localhost:26259 \
--background

$ cockroach start \
--insecure \
--store=tpcc-local2 \
--listen-addr=localhost:26258 \
--http-addr=localhost:8081 \
--join=localhost:26257,localhost:26258,localhost:26259 \
--background

$ cockroach start \
--insecure \
--store=tpcc-local3 \
--listen-addr=localhost:26259 \
--http-addr=localhost:8082 \
--join=localhost:26257,localhost:26258,localhost:26259 \
--background
Use the `cockroach init` command to perform a one-time initialization of the cluster:

$ cockroach init \
--insecure \
--host=localhost:26257
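To confirm that all three nodes are up and have joined the cluster, you can optionally check node status before moving on (not part of the benchmark itself, just a quick sanity check):

$ cockroach node status \
--insecure \
--host=localhost:26257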
## Step 2. Import the TPC-C dataset
CockroachDB comes with a number of built-in workloads for simulating client traffic. This step features CockroachDB's version of the TPC-C workload.
Use `cockroach workload` to load the initial schema and data:
$ cockroach workload fixtures import tpcc \
--warehouses=10 \
'postgresql://root@localhost:26257?sslmode=disable'
This will load 2 GB of data for 10 "warehouses".
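As an optional sanity check on the import, you can count the rows in the `tpcc.warehouse` table; with `--warehouses=10`, it should contain exactly 10 rows:

$ cockroach sql \
--insecure \
--host=localhost:26257 \
--execute="SELECT count(*) FROM tpcc.warehouse;"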
## Step 3. Run the benchmark
Run the workload for ten "warehouses" of data for ten minutes:
$ cockroach workload run tpcc \
--warehouses=10 \
--ramp=3m \
--duration=10m \
'postgresql://root@localhost:26257?sslmode=disable'
You'll see per-operation statistics every second:
Initializing 20 connections...
Initializing 100 workers and preparing statements...
_elapsed___errors__ops/sec(inst)___ops/sec(cum)__p50(ms)__p95(ms)__p99(ms)_pMax(ms)
1.0s 0 0.0 0.0 0.0 0.0 0.0 0.0 delivery
1.0s 0 0.0 0.0 0.0 0.0 0.0 0.0 newOrder
...
105.0s 0 0.0 0.2 0.0 0.0 0.0 0.0 delivery
105.0s 0 4.0 1.8 44.0 46.1 46.1 46.1 newOrder
105.0s 0 0.0 0.2 0.0 0.0 0.0 0.0 orderStatus
105.0s 0 1.0 2.0 14.7 14.7 14.7 14.7 payment
105.0s 0 0.0 0.2 0.0 0.0 0.0 0.0 stockLevel
...
For more `tpcc` options, use `cockroach workload run tpcc --help`. For details about other built-in load generators, use `cockroach workload run --help`.
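For example, a run that exercises only a subset of the loaded warehouses might look like the following sketch (the `--active-warehouses` flag is available in recent versions of the `tpcc` workload, but flags can change between releases, so check the `--help` output for your version):

$ cockroach workload run tpcc \
--warehouses=10 \
--active-warehouses=5 \
--ramp=1m \
--duration=5m \
'postgresql://root@localhost:26257?sslmode=disable'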
## Step 4. Interpret the results
Once the `workload` command has finished running, you'll see a final output line:
_elapsed_______tpmC____efc__avg(ms)__p50(ms)__p90(ms)__p95(ms)__p99(ms)_pMax(ms)
300.0s 121.6 94.6% 41.0 39.8 54.5 71.3 96.5 130.0
You will also see some audit checks and latency statistics for each individual query. For this run, some of those checks might indicate that they were `SKIPPED` due to insufficient data. For a more comprehensive test, run `workload` for a longer duration (e.g., two hours); a sample longer run is sketched at the end of this step. The `tpmC` (new order transactions/minute) number is the headline number, and `efc` ("efficiency") tells you how close CockroachDB gets to the theoretical maximum `tpmC`.
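The TPC-C specification caps throughput at 12.86 tpmC per warehouse, which is where the `efc` figure comes from. For the 10-warehouse sample output above:

max tpmC = 12.86 tpmC/warehouse × 10 warehouses = 128.6
efc      = 121.6 / 128.6 ≈ 94.6%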
The TPC-C specification has p90 latency requirements on the order of seconds, but as you can see here, CockroachDB far surpasses that requirement, with p90 latencies in the tens of milliseconds.
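To try the longer, more comprehensive run mentioned above, the same command works with an extended duration (the ramp period here is an arbitrary choice for this sketch):

$ cockroach workload run tpcc \
--warehouses=10 \
--ramp=5m \
--duration=2h \
'postgresql://root@localhost:26257?sslmode=disable'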
## Step 5. Clean up
When you're done with your test cluster, stop the nodes.
Get the process IDs of the nodes:
ps -ef | grep cockroach | grep -v grep
501 4482 1 0 2:41PM ttys000 0:09.78 cockroach start --insecure --store=tpcc-local1 --listen-addr=localhost:26257 --http-addr=localhost:8080 --join=localhost:26257,localhost:26258,localhost:26259
501 4497 1 0 2:41PM ttys000 0:08.54 cockroach start --insecure --store=tpcc-local2 --listen-addr=localhost:26258 --http-addr=localhost:8081 --join=localhost:26257,localhost:26258,localhost:26259
501 4503 1 0 2:41PM ttys000 0:08.54 cockroach start --insecure --store=tpcc-local3 --listen-addr=localhost:26259 --http-addr=localhost:8082 --join=localhost:26257,localhost:26258,localhost:26259
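Alternatively, `pgrep` can return just the process IDs, assuming your system's `pgrep` supports the `-f` flag (which matches against the full command line):

pgrep -f 'cockroach start'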
Gracefully shut down each node, specifying its process ID:
kill -TERM 4482
kill -TERM 4497
Note: For the last node, the shutdown process will take longer (about a minute) and will eventually stop the node. This is because, with only 1 of 3 nodes left, all ranges no longer have a majority of replicas available, and so the cluster is no longer operational.
kill -TERM 4503
To restart the cluster at a later time, run the same `cockroach start` commands as earlier from the directory containing the nodes' data stores.

If you do not plan to restart the cluster, you may want to remove the nodes' data stores:
$ rm -rf tpcc-local1 tpcc-local2 tpcc-local3