We recommend running ParadeDB Enterprise, not Community, with Kubernetes in
production to maximize uptime. See the overview for details.
## Prerequisites
This guide assumes you have installed Helm and have a Kubernetes cluster running v1.25+. For local testing, we recommend Minikube.

## Install the Prometheus Stack
The ParadeDB Helm chart supports monitoring via Prometheus and Grafana. To enable this, you need to have the Prometheus CRDs installed before installing the CloudNativePG operator. If you do not yet have the Prometheus CRDs installed on your Kubernetes cluster, you can install them with:
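One common way to install the Prometheus CRDs is the `kube-prometheus-stack` Helm chart, using the CloudNativePG sample configuration referenced later in this guide. The release and namespace names below are illustrative; adjust them to your environment:

```shell
# Add the Prometheus community chart repository
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

# Install kube-prometheus-stack (Prometheus, Grafana, and the CRDs),
# preconfigured with the CloudNativePG sample values file
helm upgrade --atomic --install prometheus-community \
  prometheus-community/kube-prometheus-stack \
  --create-namespace --namespace prometheus-community \
  --values https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/main/docs/src/samples/monitoring/kube-stack-config.yaml
```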
## Install the CloudNativePG Operator

Skip this step if the CloudNativePG operator is already installed in your cluster. If you do not wish to monitor your cluster, omit the `--set` flags.
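A typical installation via the CloudNativePG Helm chart might look like the following; the repository alias, namespace, and monitoring flags are illustrative and should be verified against the chart's documentation:

```shell
# Add the CloudNativePG chart repository
helm repo add cnpg https://cloudnative-pg.github.io/charts

# Install the operator with monitoring enabled (omit the --set flags
# if you do not want Prometheus/Grafana integration)
helm upgrade --install cnpg cnpg/cloudnative-pg \
  --namespace cnpg-system --create-namespace \
  --set monitoring.podMonitorEnabled=true \
  --set monitoring.grafanaDashboard.create=true
```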
## Start a ParadeDB CNPG Cluster
Create a `values.yaml` file and configure it to your requirements. Here is a basic example:
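A minimal sketch of such a file, assuming the chart follows the CloudNativePG cluster chart's value layout (key names may differ between chart versions; run `helm show values` on the chart to see the authoritative defaults):

```yaml
# Illustrative values.yaml for a ParadeDB CNPG cluster
type: paradedb        # deploy the ParadeDB flavor of the cluster
mode: standalone      # a fresh cluster (not a replica or recovery)

cluster:
  instances: 2        # use > 1 for high availability (Enterprise)
  storage:
    size: 10Gi        # per-instance persistent volume size
```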
If you are using ParadeDB Enterprise, `instances` should be set to a number greater than 1 for high availability.
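With the values file in place, the cluster can be deployed with the ParadeDB Helm chart. The repository URL, release name, and namespace below are assumptions; substitute your own:

```shell
# Add the ParadeDB chart repository (URL assumed; verify against
# the ParadeDB charts documentation)
helm repo add paradedb https://paradedb.github.io/charts

# Deploy the cluster using the values.yaml created above
helm upgrade --install paradedb paradedb/paradedb \
  --namespace paradedb --create-namespace \
  --values values.yaml
```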
## Connect to the Cluster
The command to connect to the primary instance of the cluster will be printed in your terminal. If you do not modify any settings, you can connect with `psql`:
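A sketch of what that connection might look like, assuming a release named `paradedb` in the `paradedb` namespace and the CloudNativePG naming conventions for the read-write service and application secret:

```shell
# Forward the cluster's read-write service to localhost
kubectl port-forward --namespace paradedb svc/paradedb-rw 5432:5432 &

# Read the auto-generated password from the CNPG application secret
kubectl get secret --namespace paradedb paradedb-app \
  -o jsonpath='{.data.password}' | base64 --decode

# Connect as the default application user
psql -h localhost -p 5432 -U app -d app
```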
## Connect to the Grafana Dashboard
To connect to the Grafana dashboard for your cluster, we suggest port forwarding the Kubernetes service running Grafana to localhost. You can then access the Grafana dashboard at localhost:3000 using the credentials `admin` as username and `prom-operator` as password. These default credentials are defined in the [kube-stack-config.yaml](https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/main/docs/src/samples/monitoring/kube-stack-config.yaml) file used as the `values.yaml` file in [Install the Prometheus Stack](#install-the-prometheus-stack) and can be modified by providing your own `values.yaml` file. A more detailed guide on monitoring the cluster can be found in the CloudNativePG documentation.
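The port forward described above can be sketched as follows; the service name and namespace assume the `kube-prometheus-stack` release from earlier in this guide and may differ in your cluster:

```shell
# Expose the Grafana service on localhost:3000
kubectl port-forward --namespace prometheus-community \
  svc/prometheus-community-grafana 3000:80
```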