Once you’ve installed CockroachDB, it’s simple to start a secure multi-node cluster locally, using TLS certificates to encrypt network communication.
Running multiple nodes on a single host is useful for testing out CockroachDB, but it's not recommended for production deployments. To run a physically distributed cluster in production, see Manual Deployment or Orchestrated Deployment.
Before you begin
Make sure you have already installed CockroachDB.
Step 1. Create security certificates
You can use either `cockroach cert` commands or `openssl` commands to generate security certificates. This section features the `cockroach cert` commands.
Create a directory for certificates and a safe directory for the CA key:
$ mkdir certs my-safe-directory
If using the default certificate directory (`${HOME}/.cockroach-certs`), make sure it is empty.
Create the CA (Certificate Authority) certificate and key pair:
$ cockroach cert create-ca \
--certs-dir=certs \
--ca-key=my-safe-directory/ca.key
Create the client certificate and key, in this case for the `root` user:
$ cockroach cert create-client \
root \
--certs-dir=certs \
--ca-key=my-safe-directory/ca.key
These files, `client.root.crt` and `client.root.key`, will be used to secure communication between the built-in SQL shell and the cluster (see step 4).
Create the node certificate and key:
$ cockroach cert create-node \
localhost \
$(hostname) \
--certs-dir=certs \
--ca-key=my-safe-directory/ca.key
These files, `node.crt` and `node.key`, will be used to secure communication between nodes. Typically, you would generate these separately for each node, since each node has unique addresses; in this case, however, since all nodes will be running locally, you need to generate only one node certificate and key.
Step 2. Start the first node
$ cockroach start \
--certs-dir=certs \
--listen-addr=localhost
CockroachDB node starting at 2018-09-13 01:25:57.878119479 +0000 UTC (took 0.3s)
build: CCL v2.1.11 @ 2020-01-29 00:00:00
webui: https://localhost:8080
sql: postgresql://root@ROACHs-MBP:26257?sslcert=%2FUsers%2F...
client flags: cockroach <client cmd> --host=localhost:26257 --certs-dir=certs
logs: cockroach/cockroach-data/logs
temp dir: cockroach-data/cockroach-temp998550693
external I/O path: cockroach-data/extern
store[0]: path=cockroach-data
status: initialized new cluster
clusterID: 2711b3fa-43b3-4353-9a23-20c9fb3372aa
nodeID: 1
This command starts a node in secure mode, accepting most `cockroach start` defaults.
- The `--certs-dir` flag points to the directory holding certificates and keys.
- Since this is a purely local cluster, `--listen-addr=localhost` tells the node to listen only on `localhost`, with default ports used for internal and client traffic (`26257`) and for HTTP requests from the Admin UI (`8080`).
- Node data is stored in the `cockroach-data` directory.
- The standard output gives you helpful details such as the CockroachDB version, the URL for the Admin UI, and the SQL URL for clients.
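The `sql:` line in the startup output percent-encodes the certificate paths in the connection URL (the `%2FUsers%2F...` part). As a minimal sketch, using hypothetical local paths and the standard libpq TLS parameters (`sslcert`, `sslkey`, `sslmode`), such a URL could be assembled like this:

```python
from urllib.parse import urlencode

# Hypothetical paths to the client certificate files created in Step 1;
# substitute your own certs directory.
params = {
    "sslcert": "/Users/you/certs/client.root.crt",
    "sslkey": "/Users/you/certs/client.root.key",
    "sslmode": "verify-full",
}

# urlencode percent-encodes each value, so "/" becomes "%2F" just as in
# the "sql:" line of the startup output.
url = "postgresql://root@localhost:26257?" + urlencode(params)
print(url)
```

Any PostgreSQL-compatible client that accepts a connection URL can then use this string to reach the cluster over TLS.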
Step 3. Add nodes to the cluster
At this point, your cluster is live and operational. With just one node, you can already connect a SQL client and start building out your database. In real deployments, however, you'll always want 3 or more nodes to take advantage of CockroachDB's automatic replication, rebalancing, and fault tolerance capabilities. This step helps you simulate a real deployment locally.
In a new terminal, add the second node:
$ cockroach start \
--certs-dir=certs \
--store=node2 \
--listen-addr=localhost:26258 \
--http-addr=localhost:8081 \
--join=localhost:26257
In a new terminal, add the third node:
$ cockroach start \
--certs-dir=certs \
--store=node3 \
--listen-addr=localhost:26259 \
--http-addr=localhost:8082 \
--join=localhost:26257
The main difference in these commands is that you use the `--join` flag to connect the new nodes to the cluster, specifying the address and port of the first node, in this case `localhost:26257`. Since you're running all nodes on the same machine, you also set the `--store`, `--listen-addr`, and `--http-addr` flags to locations and ports not used by other nodes. In a real deployment, with each node on a different machine, the defaults would suffice.
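The per-node flag layout above follows a simple pattern: each additional local node offsets the default SQL port (`26257`) and Admin UI port (`8080`) by its index and gets its own store directory. A sketch (not an official helper) that reproduces the commands above:

```python
# Hypothetical helper mirroring the local-cluster commands above: node 1
# uses the defaults, later nodes offset the ports and name their stores.
def local_node_flags(n):
    """Flags for the n-th local node (1-based)."""
    if n == 1:
        return ["--certs-dir=certs", "--listen-addr=localhost"]
    return [
        "--certs-dir=certs",
        f"--store=node{n}",
        f"--listen-addr=localhost:{26257 + n - 1}",
        f"--http-addr=localhost:{8080 + n - 1}",
        "--join=localhost:26257",  # address of the first node
    ]

for n in (1, 2, 3):
    print("cockroach start \\\n  " + " \\\n  ".join(local_node_flags(n)))
```

For node 2 this yields exactly the `--store=node2`, `--listen-addr=localhost:26258`, and `--http-addr=localhost:8081` values used above.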
Step 4. Test the cluster
Now that you've scaled to 3 nodes, you can use any node as a SQL gateway to the cluster. To demonstrate this, open a new terminal and connect the built-in SQL client to node 1:
The SQL client is built into the `cockroach` binary, so nothing extra is needed.
$ cockroach sql \
--certs-dir=certs \
--host=localhost:26257
Run some basic CockroachDB SQL statements:
> CREATE DATABASE bank;
> CREATE TABLE bank.accounts (id INT PRIMARY KEY, balance DECIMAL);
> INSERT INTO bank.accounts VALUES (1, 1000.50);
> SELECT * FROM bank.accounts;
  id | balance
+----+---------+
   1 | 1000.50
(1 row)
Exit the SQL shell on node 1:
> \q
Then connect the SQL shell to node 2, this time specifying the node's non-default port:
$ cockroach sql \
--certs-dir=certs \
--host=localhost:26258
In a real deployment, all nodes would likely use the default port `26257`, and so you wouldn't need to set the `--port` flag.
Now run the same `SELECT` query:
query:
> SELECT * FROM bank.accounts;
  id | balance
+----+---------+
   1 | 1000.50
(1 row)
As you can see, node 1 and node 2 behaved identically as SQL gateways.
Finally, create a user with a password, which you will need in the next step to access the Admin UI:
> CREATE USER roach WITH PASSWORD 'Q7gc8rEdS';
Exit the SQL shell on node 2:
> \q
Step 5. Monitor the cluster
Access the Admin UI for your cluster by pointing a browser to https://localhost:8080, or to the address in the `webui` field in the standard output of any node on startup. Note that your browser will consider the CockroachDB-created certificate invalid; you’ll need to click through a warning message to get to the UI.
Log in with the username and password created in the Test the cluster step. Then click Metrics on the left-hand navigation bar.
As mentioned earlier, CockroachDB automatically replicates your data behind the scenes. To verify that data written in the previous step was replicated successfully, scroll down to the Replicas per Node graph and hover over the line.
The replica count on each node is identical, indicating that all data in the cluster was replicated 3 times (the default).
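The fault tolerance that 3-way replication buys you follows from quorum arithmetic: each range needs a majority of its replicas available. A minimal sketch of that arithmetic:

```python
# With replication factor r, a range stays available as long as a
# majority of its r replicas survive, so it tolerates (r - 1) // 2
# simultaneous replica failures.
def tolerated_failures(replicas):
    """Replica losses a range can absorb while keeping a majority."""
    return (replicas - 1) // 2

print(tolerated_failures(3))  # the default 3-way replication survives 1 loss
print(tolerated_failures(5))  # 5-way replication survives 2 losses
```

This is why the next step, stopping one of your three nodes, leaves the cluster operational.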
Step 6. Stop the cluster
Once you're done with your test cluster, switch to the terminal running the first node and press CTRL-C to stop the node.
At this point, with 2 nodes still online, the cluster remains operational because a majority of replicas are available. To verify that the cluster has tolerated this "failure", connect the built-in SQL shell to node 2 or 3. You can do this in the same terminal or in a new terminal.
$ cockroach sql \
--certs-dir=certs \
--host=localhost:26258
> SELECT * FROM bank.accounts;
  id | balance
+----+---------+
   1 | 1000.50
(1 row)
Exit the SQL shell:
> \q
Now stop nodes 2 and 3 by switching to their terminals and pressing CTRL-C.
If you do not plan to restart the cluster, you may want to remove the nodes' data stores:
$ rm -rf cockroach-data node2 node3
Step 7. Restart the cluster
If you decide to use the cluster for further testing, you'll need to restart at least 2 of your 3 nodes from the directories containing the nodes' data stores.
Restart the first node from the parent directory of `cockroach-data`:
$ cockroach start \
--certs-dir=certs \
--listen-addr=localhost
In a new terminal, restart the second node from the parent directory of `node2`:
$ cockroach start \
--certs-dir=certs \
--store=node2 \
--listen-addr=localhost:26258 \
--http-addr=localhost:8081 \
--join=localhost:26257
In a new terminal, restart the third node from the parent directory of `node3`:
$ cockroach start \
--certs-dir=certs \
--store=node3 \
--listen-addr=localhost:26259 \
--http-addr=localhost:8082 \
--join=localhost:26257
What's next?
- Learn more about CockroachDB SQL and the built-in SQL client
- Install the client driver for your preferred language
- Learn how to use Client Connection Parameters to connect your app to your secure cluster
- Build an app with CockroachDB
- Explore core CockroachDB features like automatic replication, rebalancing, and fault tolerance