This page shows you how to manually deploy an insecure multi-node CockroachDB cluster on Amazon's AWS EC2 platform, using AWS's managed load balancing service to distribute client traffic.
Requirements
You must have SSH access (key pairs/SSH login) to each machine with root or sudo privileges. This is necessary for distributing binaries and starting CockroachDB.
Recommendations
If you plan to use CockroachDB in production, we recommend using a secure cluster instead. Using an insecure cluster comes with risks:
- Your cluster is open to any client that can access any node's IP addresses.
- Any user, even `root`, can log in without providing a password.
- Any user, connecting as `root`, can read or write any data in your cluster.
- There is no network encryption or authentication, and thus no confidentiality.
For guidance on cluster topology, clock synchronization, and file descriptor limits, see Recommended Production Settings.
All instances running CockroachDB should be members of the same Security Group.
Decide how you want to access your Admin UI:
- Only from specific IP addresses, which requires you to set firewall rules to allow communication on port `8080` (documented on this page)
- Using an SSH tunnel, which requires you to use `--http-host=localhost` when starting your nodes
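If you choose the SSH tunnel approach, a local port forward is enough to reach the Admin UI from your machine. For example, using the same key and address placeholders as the SSH commands later on this page:

$ ssh -i <path to AWS .pem> -L 8080:localhost:8080 <username>@<node1 external IP address>

With the tunnel open, the Admin UI is available locally at http://localhost:8080.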
Step 1. Configure your network
CockroachDB requires TCP communication on two ports:

- `26257` for inter-node communication (i.e., working as a cluster), for applications to connect to the load balancer, and for routing from the load balancer to nodes
- `8080` for exposing your Admin UI
You can create these rules using Security Groups' Inbound Rules.
Inter-node and load balancer-node communication
Field | Recommended Value |
---|---|
Type | Custom TCP Rule |
Protocol | TCP |
Port Range | 26257 |
Source | The name of your security group (e.g., sg-07ab277a) |
Admin UI
Field | Recommended Value |
---|---|
Type | Custom TCP Rule |
Protocol | TCP |
Port Range | 8080 |
Source | Your network's IP ranges |
Application data
Field | Recommended Value |
---|---|
Type | Custom TCP Rule |
Protocol | TCP |
Port Range | 26257 |
Source | Your application's IP ranges |
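If you prefer to script these rules rather than use the AWS Console, the following sketch uses the AWS CLI; the security group ID (sg-07ab277a) and the CIDR ranges below are placeholders you would replace with your own values:

# Allow inter-node and load balancer-node traffic on 26257 from within the security group.
$ aws ec2 authorize-security-group-ingress \
--group-id sg-07ab277a \
--protocol tcp \
--port 26257 \
--source-group sg-07ab277a

# Allow Admin UI traffic on 8080 from your network's IP range.
$ aws ec2 authorize-security-group-ingress \
--group-id sg-07ab277a \
--protocol tcp \
--port 8080 \
--cidr 203.0.113.0/24

# Allow application traffic on 26257 from your application's IP range.
$ aws ec2 authorize-security-group-ingress \
--group-id sg-07ab277a \
--protocol tcp \
--port 26257 \
--cidr 198.51.100.0/24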
Step 2. Create instances
Create an instance for each node you plan to have in your cluster. We recommend:
- Running at least 3 CockroachDB nodes to ensure survivability.
- Selecting the same continent for all of your instances for best performance.
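For example, a minimal AWS CLI invocation to launch three instances might look like the following sketch, assuming you already have an AMI, a key pair, and the security group configured in step 1 (all placeholders are your own values):

$ aws ec2 run-instances \
--image-id <AMI ID> \
--count 3 \
--instance-type <instance type> \
--key-name <key pair name> \
--security-group-ids <security group ID>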
Step 3. Set up load balancing
Each CockroachDB node is an equally suitable SQL gateway to your cluster, but to ensure client performance and reliability, it's important to use load balancing:
Performance: Load balancers spread client traffic across nodes. This prevents any one node from being overwhelmed by requests and improves overall cluster performance (queries per second).
Reliability: Load balancers decouple client health from the health of a single CockroachDB node. In cases where a node fails, the load balancer redirects client traffic to available nodes.
AWS offers fully-managed load balancing to distribute traffic between instances.
- Add AWS load balancing. Be sure to:
  - Set forwarding rules to route TCP traffic from the load balancer's port `26257` to port `26257` on the instances.
  - Configure health checks to use HTTP port `8080` and path `/health`.
- Note the provisioned IP Address for the load balancer. You'll use this later to test load balancing and to connect your application to the cluster.
Step 4. Start the first node
SSH to your instance:
$ ssh -i <path to AWS .pem> <username>@<node1 external IP address>
Install the latest CockroachDB binary:
# Get the latest CockroachDB tarball.
$ curl -O https://binaries.cockroachdb.com/cockroach-v1.0.7.linux-amd64.tgz

# Extract the binary.
$ tar -xzf cockroach-v1.0.7.linux-amd64.tgz \
--strip=1 cockroach-v1.0.7.linux-amd64/cockroach

# Move the binary.
$ sudo mv cockroach /usr/local/bin/
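To confirm the binary is installed and on your PATH, you can check its version:

$ cockroach version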
Start a new CockroachDB cluster with a single node, which will communicate with other nodes on its internal IP address:
$ cockroach start --insecure \
--background
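As a quick sanity check, you can run the built-in status command on this instance; at this point it should report a single live node:

$ cockroach node status --insecure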
Step 5. Add nodes to the cluster
At this point, your cluster is live and operational but contains only a single node. Next, scale your cluster by setting up additional nodes that will join the cluster.
SSH to another instance:
$ ssh -i <path to AWS .pem> <username>@<additional node external IP address>
Install CockroachDB from our latest binary:
# Get the latest CockroachDB tarball.
$ curl -O https://binaries.cockroachdb.com/cockroach-v1.0.7.linux-amd64.tgz

# Extract the binary.
$ tar -xzf cockroach-v1.0.7.linux-amd64.tgz \
--strip=1 cockroach-v1.0.7.linux-amd64/cockroach

# Move the binary.
$ sudo mv cockroach /usr/local/bin/
Start a new node that joins the cluster using the first node's internal IP address:
$ cockroach start --insecure \
--background \
--join=<node1 internal IP address>:26257
Repeat these steps for each instance you want to use as a node.
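Once the nodes are running, you can also confirm that each one passes the same health check the load balancer uses (the `/health` path configured in step 3), for example:

$ curl http://<node external IP address>:8080/health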
Step 6. Test your cluster
CockroachDB replicates and distributes data for you behind the scenes and uses a gossip protocol to enable each node to locate data across the cluster.
To test this, use the built-in SQL client as follows:
SSH to your first node:
$ ssh -i <path to AWS .pem> <username>@<node1 external IP address>
Launch the built-in SQL client and create a database:
$ cockroach sql --insecure
> CREATE DATABASE insecurenodetest;
In another terminal window, SSH to another node:
$ ssh -i <path to AWS .pem> <username>@<node3 external IP address>
Launch the built-in SQL client:
$ cockroach sql --insecure
View the cluster's databases, which will include `insecurenodetest`:

> SHOW DATABASES;

+--------------------+
|      Database      |
+--------------------+
| crdb_internal      |
| information_schema |
| insecurenodetest   |
| pg_catalog         |
| system             |
+--------------------+
(5 rows)
Use CTRL-D, CTRL-C, or `\q` to exit the SQL shell.
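To take the test one step further, you could reopen the SQL shells and write a row through one node, then read it back through the other; the `pings` table below is purely illustrative:

On the first node's SQL shell:

> CREATE TABLE insecurenodetest.pings (id INT PRIMARY KEY, note STRING);

> INSERT INTO insecurenodetest.pings VALUES (1, 'written via node 1');

On the other node's SQL shell:

> SELECT * FROM insecurenodetest.pings;

The row comes back regardless of which node you query, because CockroachDB replicates it across the cluster automatically.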
Step 7. Test load balancing
The AWS load balancer created in step 3 can serve as the client gateway to the cluster. Instead of connecting directly to a CockroachDB node, clients can connect to the load balancer, which will then redirect the connection to a CockroachDB node.
To test this, install CockroachDB locally and use the built-in SQL client as follows:
Install CockroachDB on your local machine, if it's not there already.
Launch the built-in SQL client, with the `--host` flag set to the load balancer's IP address:

$ cockroach sql --insecure \
--host=<load balancer IP address> \
--port=26257
View the cluster's databases:
> SHOW DATABASES;
+--------------------+
|      Database      |
+--------------------+
| crdb_internal      |
| information_schema |
| insecurenodetest   |
| pg_catalog         |
| system             |
+--------------------+
(5 rows)
As you can see, the load balancer redirected the query to one of the CockroachDB nodes.
Check which node you were redirected to:
> SELECT node_id FROM crdb_internal.node_build_info LIMIT 1;
+---------+
| node_id |
+---------+
|       3 |
+---------+
(1 row)
Use CTRL-D, CTRL-C, or `\q` to exit the SQL shell.
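If you repeat that query a few times through the load balancer, you will likely see the `node_id` change as connections are spread across nodes. A rough loop from your local machine makes this visible:

$ for i in 1 2 3; do
    cockroach sql --insecure \
    --host=<load balancer IP address> \
    --port=26257 \
    -e "SELECT node_id FROM crdb_internal.node_build_info LIMIT 1";
  done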
Step 8. Monitor the cluster
View your cluster's Admin UI by going to `http://<any node's external IP address>:8080`.
On this page, verify that the cluster is running as expected:
Click View nodes list on the right to ensure that all of your nodes successfully joined the cluster.
Click the Databases tab on the left to verify that `insecurenodetest` is listed.
Step 9. Use the database
Now that your deployment is working, you can:
- Implement your data model.
- Create users and grant them privileges.
- Connect your application. Be sure to connect your application to the AWS load balancer, not to a CockroachDB node.
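Most PostgreSQL-compatible drivers, for example, can reach the cluster through the load balancer with a connection URL along these lines (shown against the `insecurenodetest` database from step 6; substitute your own database and user):

postgresql://root@<load balancer IP address>:26257/insecurenodetest?sslmode=disable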