This is my config for how I deployed my Postgres cluster using CloudNativePG (CNPG). It's a fairly barebones setup, since all I wanted was a set of Postgres instances that can survive a pod going down without an extended service interruption. You should definitely read the docs to see all the options you can customize.
An operator manages our instances. I didn't customize the operator and just used the default settings:
helm repo add cnpg https://cloudnative-pg.github.io/charts
helm repo update
helm upgrade --install cnpg --namespace cnpg-system --create-namespace cnpg/cloudnative-pg
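To confirm the operator came up, check the pods in the namespace the chart created; you should see a single controller pod in the Running state:

kubectl get pods -n cnpg-system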
Once the operator is set up, all you need to do is deploy a Cluster manifest:
# Example of PostgreSQL cluster
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: tds-postgresql
  namespace: default
spec:
  instances: 3
  imageName: ghcr.io/cloudnative-pg/postgresql:14
  storage:
    storageClass: nfs-client
    size: 10Gi
  monitoring:
    enablePodMonitor: true
  enableSuperuserAccess: true
  resources:
    requests:
      memory: "1Gi"
    limits:
      memory: "1Gi"
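To deploy it, save the manifest to a file (the file name here is just an example) and apply it. You can watch the operator bring the instances up through the Cluster resource's status:

kubectl apply -f cluster.yaml
kubectl get cluster tds-postgresql -w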
This will deploy a Postgres cluster with 3 instances: one read-write primary and 2 read-only replicas. If the read-write instance goes down, the operator promotes one of the 2 read-only instances to primary. This also deploys Postgres 14 and creates an empty database named app. All the credentials are stored in Kubernetes secrets.
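To actually reach the cluster, CNPG creates per-role services and secrets named after the cluster: tds-postgresql-rw for the primary, tds-postgresql-ro for the replicas, and a tds-postgresql-app secret for the default app user (plus tds-postgresql-superuser here, since enableSuperuserAccess is on). A quick sketch of pulling the app password and the connection string to use from inside the cluster:

# Read the generated password for the default "app" user
kubectl get secret tds-postgresql-app -o jsonpath='{.data.password}' | base64 -d

# Point clients at the read-write service (standard Kubernetes service DNS):
# postgresql://app:<password>@tds-postgresql-rw.default.svc:5432/app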