This page explains how to configure Kubernetes cluster resources such as memory, CPU, and storage.
These settings override the defaults used when deploying CockroachDB on Kubernetes.
All kubectl steps should be performed in the namespace where you installed the Operator. By default, this is cockroach-operator-system.
If you deployed CockroachDB on Red Hat OpenShift, substitute kubectl with oc in the following commands.
On a production cluster, the resources you allocate to CockroachDB should be proportionate to your machine types and workload. We recommend that you determine and set these values before deploying the cluster, but you can also update the values on a running cluster.
Run kubectl describe nodes to see the available resources on the instances that you have provisioned.
Memory and CPU
You can set the CPU and memory resources allocated to the CockroachDB container on each pod.
1 CPU in Kubernetes is equivalent to 1 vCPU or 1 hyperthread. For best practices on provisioning CPU and memory for CockroachDB, see the Production Checklist.
Specify CPU and memory values in resources.requests and resources.limits in the Operator's custom resource, which is used to deploy the cluster:
spec:
  resources:
    requests:
      cpu: "4"
      memory: "16Gi"
    limits:
      cpu: "4"
      memory: "16Gi"
Apply the new settings to the cluster:
$ kubectl apply -f example.yaml
Specify CPU and memory values in resources.requests and resources.limits in the StatefulSet manifest you used to deploy the cluster:
spec:
  template:
    spec:
      containers:
      - name: cockroachdb
        resources:
          requests:
            cpu: "4"
            memory: "16Gi"
          limits:
            cpu: "4"
            memory: "16Gi"
Apply the new settings to the cluster:
$ kubectl apply -f {statefulset-manifest}.yaml
Specify CPU and memory values in resources.requests and resources.limits in the custom values file you created when deploying the cluster:
statefulset:
  resources:
    limits:
      cpu: "4"
      memory: "16Gi"
    requests:
      cpu: "4"
      memory: "16Gi"
Apply the custom values to override the default Helm chart values:
$ helm upgrade {release-name} --values {custom-values}.yaml cockroachdb/cockroachdb
We recommend using identical values for resources.requests and resources.limits. When setting the new values, note that not all of a pod's resources will be available to the CockroachDB container. This is because a fraction of the CPU and memory is reserved for Kubernetes.
If no resource limits are specified, the pods will be able to consume the maximum available CPUs and memory. However, to avoid overallocating resources when another memory-intensive workload is on the same instance, always set resource requests and limits explicitly.
For more information on how Kubernetes handles resources, see the Kubernetes documentation.
Cache and SQL memory size
Each CockroachDB node reserves a portion of its available memory for its cache and for storing temporary data for SQL queries. For more information on these settings, see the Production Checklist.
Our Kubernetes manifests dynamically set cache size and SQL memory size each to 1/4 (the recommended fraction) of the available memory, which depends on the memory request and limit you specified for your configuration. If you want to customize these values, set them explicitly.
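As a quick sanity check, the 1/4 fraction can be computed directly from the memory limit. This sketch assumes a 16GiB memory limit, matching the example values used on this page:

```shell
# Sketch: compute the default cache and SQL memory sizes, each 1/4 of the
# container memory limit. The 16GiB limit is an example value; substitute
# the memory limit you set in your own configuration.
mem_limit_gib=16
cache_gib=$((mem_limit_gib / 4))
sql_mem_gib=$((mem_limit_gib / 4))
echo "cache=${cache_gib}Gi max-sql-memory=${sql_mem_gib}Gi"
```

With a 16GiB limit, this prints 4Gi for each setting, matching the explicit values shown in the examples below.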
Specify cache and maxSQLMemory in the Operator's custom resource, which is used to deploy the cluster:
spec:
  cache: "4Gi"
  maxSQLMemory: "4Gi"
Apply the new settings to the cluster:
$ kubectl apply -f example.yaml
Specifying these values is equivalent to using the --cache and --max-sql-memory flags with cockroach start.
Specify cache and max-sql-memory in the custom values file you created when deploying the cluster:
conf:
  cache: "4Gi"
  max-sql-memory: "4Gi"
Apply the custom values to override the default Helm chart values:
$ helm upgrade {release-name} --values {custom-values}.yaml cockroachdb/cockroachdb
Persistent storage
When you start your cluster, Kubernetes dynamically provisions and mounts a persistent volume into each pod. For more information on persistent volumes, see the Kubernetes documentation.
The storage capacity of each volume is set in pvc.spec.resources in the Operator's custom resource, which is used to deploy the cluster:
spec:
  dataStore:
    pvc:
      spec:
        resources:
          limits:
            storage: "60Gi"
          requests:
            storage: "60Gi"
The storage capacity of each volume is initially set in volumeClaimTemplates.spec.resources in the StatefulSet manifest you used to deploy the cluster:
volumeClaimTemplates:
- spec:
    resources:
      requests:
        storage: 100Gi
The storage capacity of each volume is initially set in the Helm chart's values file:
persistentVolume:
  size: 100Gi
You should provision an appropriate amount of disk storage for your workload. For recommendations on this, see the Production Checklist.
Expand disk size
If you discover that you need more capacity, you can expand the persistent volumes on a running cluster. Increasing disk size is often beneficial for CockroachDB performance.
Specify a new volume size in resources.requests and resources.limits in the Operator's custom resource, which is used to deploy the cluster:
spec:
  dataStore:
    pvc:
      spec:
        resources:
          limits:
            storage: "100Gi"
          requests:
            storage: "100Gi"
Apply the new settings to the cluster:
$ kubectl apply -f example.yaml
The Operator updates the StatefulSet and triggers a rolling restart of the pods with the new storage capacity.
To verify that the storage capacity has been updated, run kubectl get pvc to view the persistent volume claims (PVCs). It will take a few minutes before the PVCs are completely updated.
You can expand certain types of persistent volumes (including GCE Persistent Disk and Amazon Elastic Block Store) by editing their persistent volume claims.
These steps assume you followed the tutorial Deploy CockroachDB on Kubernetes.
Get the persistent volume claims for the volumes:
$ kubectl get pvc
NAME                    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
datadir-cockroachdb-0   Bound    pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb   100Gi      RWO            standard       17m
datadir-cockroachdb-1   Bound    pvc-75e143ca-01a1-11ea-b065-42010a8e00cb   100Gi      RWO            standard       17m
datadir-cockroachdb-2   Bound    pvc-75ef409a-01a1-11ea-b065-42010a8e00cb   100Gi      RWO            standard       17m
In order to expand a persistent volume claim, AllowVolumeExpansion in its storage class must be true. Examine the storage class:
$ kubectl describe storageclass standard
Name:                  standard
IsDefaultClass:        Yes
Annotations:           storageclass.kubernetes.io/is-default-class=true
Provisioner:           kubernetes.io/gce-pd
Parameters:            type=pd-standard
AllowVolumeExpansion:  False
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>
If necessary, edit the storage class:
$ kubectl patch storageclass standard -p '{"allowVolumeExpansion": true}'
storageclass.storage.k8s.io/standard patched
Edit one of the persistent volume claims to request more space:
Note: The requested storage value must be larger than the previous value. You cannot use this method to decrease the disk size.
$ kubectl patch pvc datadir-cockroachdb-0 -p '{"spec": {"resources": {"requests": {"storage": "200Gi"}}}}'
persistentvolumeclaim/datadir-cockroachdb-0 patched
Check the capacity of the persistent volume claim:
$ kubectl get pvc datadir-cockroachdb-0
NAME                    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
datadir-cockroachdb-0   Bound    pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb   100Gi      RWO            standard       18m
If the PVC capacity has not changed, this may be because AllowVolumeExpansion was initially set to false or because the volume has a file system that has to be expanded. You will need to start or restart a pod in order to have it reflect the new capacity.
Tip: Running kubectl get pv will display the persistent volumes with their requested capacity and not their actual capacity. This can be misleading, so it's best to use kubectl get pvc.
Examine the persistent volume claim. If the volume has a file system, you will see a FileSystemResizePending condition with an accompanying message:
$ kubectl describe pvc datadir-cockroachdb-0
Waiting for user to (re-)start a pod to finish file system resize of volume on node.
Delete the corresponding pod to restart it:
$ kubectl delete pod cockroachdb-0
The FileSystemResizePending condition and message will be removed. View the updated persistent volume claim:
$ kubectl get pvc datadir-cockroachdb-0
NAME                    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
datadir-cockroachdb-0   Bound    pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb   200Gi      RWO            standard       20m
The CockroachDB cluster needs to be expanded one node at a time. Repeat steps 3 - 6 to increase the capacities of the remaining volumes by the same amount.
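The per-volume steps above can be sketched as a loop. This version only prints the patch commands rather than running them; drop the echo to apply them for real. The PVC names and the 200Gi target follow the example in this tutorial:

```shell
# Sketch: expand every PVC by the same amount, one node at a time.
# Echoes the commands instead of executing them, so it is safe to run
# anywhere; remove the "echo" to patch your cluster for real.
for i in 0 1 2; do
  echo kubectl patch pvc "datadir-cockroachdb-$i" \
    -p '{"spec": {"resources": {"requests": {"storage": "200Gi"}}}}'
done
```

In practice, wait for each volume's resize (and any required pod restart) to complete before moving to the next index.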
You can expand certain types of persistent volumes (including GCE Persistent Disk and Amazon Elastic Block Store) by editing their persistent volume claims.
These steps assume you followed the tutorial Deploy CockroachDB on Kubernetes.
Get the persistent volume claims for the volumes:
$ kubectl get pvc
NAME                               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
datadir-my-release-cockroachdb-0   Bound    pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb   100Gi      RWO            standard       17m
datadir-my-release-cockroachdb-1   Bound    pvc-75e143ca-01a1-11ea-b065-42010a8e00cb   100Gi      RWO            standard       17m
datadir-my-release-cockroachdb-2   Bound    pvc-75ef409a-01a1-11ea-b065-42010a8e00cb   100Gi      RWO            standard       17m
In order to expand a persistent volume claim, AllowVolumeExpansion in its storage class must be true. Examine the storage class:
$ kubectl describe storageclass standard
Name:                  standard
IsDefaultClass:        Yes
Annotations:           storageclass.kubernetes.io/is-default-class=true
Provisioner:           kubernetes.io/gce-pd
Parameters:            type=pd-standard
AllowVolumeExpansion:  False
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>
If necessary, edit the storage class:
$ kubectl patch storageclass standard -p '{"allowVolumeExpansion": true}'
storageclass.storage.k8s.io/standard patched
Edit one of the persistent volume claims to request more space:
Note: The requested storage value must be larger than the previous value. You cannot use this method to decrease the disk size.
$ kubectl patch pvc datadir-my-release-cockroachdb-0 -p '{"spec": {"resources": {"requests": {"storage": "200Gi"}}}}'
persistentvolumeclaim/datadir-my-release-cockroachdb-0 patched
Check the capacity of the persistent volume claim:
$ kubectl get pvc datadir-my-release-cockroachdb-0
NAME                               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
datadir-my-release-cockroachdb-0   Bound    pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb   100Gi      RWO            standard       18m
If the PVC capacity has not changed, this may be because AllowVolumeExpansion was initially set to false or because the volume has a file system that has to be expanded. You will need to start or restart a pod in order to have it reflect the new capacity.
Tip: Running kubectl get pv will display the persistent volumes with their requested capacity and not their actual capacity. This can be misleading, so it's best to use kubectl get pvc.
Examine the persistent volume claim. If the volume has a file system, you will see a FileSystemResizePending condition with an accompanying message:
$ kubectl describe pvc datadir-my-release-cockroachdb-0
Waiting for user to (re-)start a pod to finish file system resize of volume on node.
Delete the corresponding pod to restart it:
$ kubectl delete pod my-release-cockroachdb-0
The FileSystemResizePending condition and message will be removed. View the updated persistent volume claim:
$ kubectl get pvc datadir-my-release-cockroachdb-0
NAME                               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
datadir-my-release-cockroachdb-0   Bound    pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb   200Gi      RWO            standard       20m
The CockroachDB cluster needs to be expanded one node at a time. Repeat steps 3 - 6 to increase the capacities of the remaining volumes by the same amount.
Network ports
The Operator separates network traffic into three ports:
Protocol | Default | Description | Custom Resource Field
---|---|---|---
gRPC | 26258 | Used for node connections | grpcPort
HTTP | 8080 | Used to access the DB Console | httpPort
SQL | 26257 | Used for SQL shell access | sqlPort
Specify alternate port numbers in the Operator's custom resource (for example, to match the default port 5432 on PostgreSQL):
spec:
  sqlPort: 5432
Apply the new settings to the cluster:
$ kubectl apply -f example.yaml
The Operator updates the StatefulSet and triggers a rolling restart of the pods with the new port settings.
Currently, only the pods are updated with new ports. To connect to the cluster, you need to ensure that the public service is also updated to use the new port. You can do this by deleting the service with kubectl delete service {cluster-name}-public. When the service is recreated by the Operator, it will use the new port. This is a known limitation that will be fixed in an Operator update.
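The workaround above can be sketched as follows. The cluster name "cockroachdb" is a hypothetical example; substitute the name of your own CrdbCluster resource. The sketch only prints the delete command; run the printed command against your cluster:

```shell
# Sketch: derive the public service name from the cluster name set in the
# custom resource, then print the command the workaround runs. The cluster
# name below is an example value, not a required name.
cluster_name="cockroachdb"
svc="${cluster_name}-public"
echo "kubectl delete service ${svc}"
```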
Ingress
You can configure an Ingress object to expose an internal HTTP or SQL ClusterIP service through a hostname.
In order to use the Ingress resource, your cluster must be running an Ingress controller for load balancing. This is not handled by the Operator and must be deployed separately.
Specify Ingress objects in ingress.ui (HTTP) or ingress.sql (SQL) in the Operator's custom resource, which is used to deploy the cluster:
spec:
  ingress:
    ui:
      ingressClassName: nginx
      annotations:
        key: value
      host: ui.example.com
    sql:
      ingressClassName: nginx
      annotations:
        key: value
      host: sql.example.com
ingressClassName specifies the IngressClass of the Ingress controller. This example uses the nginx controller.
The host must be made publicly accessible. For example, create a route in Amazon Route 53, or add an entry to /etc/hosts that maps the IP address of the Ingress controller to the hostname.
Note: Multiple hosts can be mapped to the same Ingress controller IP.
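For local testing, an /etc/hosts mapping might look like the following. The IP address is a hypothetical example standing in for the Ingress controller's external IP; the hostnames match the custom resource example above:

```
# /etc/hosts — both hostnames map to the same Ingress controller IP
10.0.0.5  ui.example.com sql.example.com
```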
TCP connections for SQL clients must be enabled for the Ingress controller. For an example, see the nginx documentation.
Note: Changing the SQL Ingress host on a running deployment will cause a rolling restart of the cluster, due to new node certificates being generated for the SQL host.
The custom resource definition details the fields supported by the Operator.