Using GCP cluster provider

The GCP service account must have the Editor, Secret Manager Admin, and Kubernetes Engine Admin roles to be able to provision and destroy GCP GKE clusters.
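As a sketch, these roles could be granted with `gcloud`; the project ID and service account email below are placeholders, not values from this document:

```shell
# Placeholders: replace with your project ID and service account email.
PROJECT_ID="my-project"
SA_EMAIL="rmk-provisioner@${PROJECT_ID}.iam.gserviceaccount.com"

# Grant the three roles needed to provision and destroy GKE clusters.
for ROLE in roles/editor roles/secretmanager.admin roles/container.admin; do
  gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
    --member="serviceAccount:${SA_EMAIL}" \
    --role="${ROLE}"
done
```

The role IDs map to the console names as follows: Editor is `roles/editor`, Secret Manager Admin is `roles/secretmanager.admin`, and Kubernetes Engine Admin is `roles/container.admin`.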

Before provisioning the Kubernetes cluster, add or override the configuration file to scope the deps for the target Kubernetes cluster:

controlPlane:
  spec:
    version: "v1.30.5"

machinePools:
  app:
    enabled: true
    managed:
      spec:
        # MachineType is the name of a Google Compute Engine
        # (https://cloud.google.com/compute/docs/machine-types).
        # If unspecified, the default machine type is `e2-medium`.
        machineType: "e2-medium"
        management:
          # AutoUpgrade specifies whether node auto-upgrade is enabled for the node
          # pool. If enabled, node auto-upgrade helps keep the nodes in your node pool
          # up to date with the latest release version of Kubernetes.
          autoUpgrade: true
        # MaxPodsPerNode is a constraint enforced on the maximum number of pods per node.
    replicas: 1
# ...

Using the example above and the example from the cluster-deps repository you can add the required number of machine pools depending on the requirements for distribution into individual roles.
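For instance, a second pool dedicated to a separate workload role could be declared alongside the `app` pool; the pool name, machine type, and replica count below are illustrative assumptions, not values from the cluster-deps repository:

```yaml
machinePools:
  app:
    enabled: true
    # ... as in the example above ...
  # Hypothetical additional pool for a dedicated workload role.
  monitoring:
    enabled: true
    managed:
      spec:
        machineType: "e2-standard-4"  # illustrative machine type
        management:
          autoUpgrade: true
    replicas: 2
```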

For the GCP provider, before launching the actual provisioning of the cluster, RMK will perform preliminary steps such as creating a Cloud NAT, which is shared by clusters in the same region.

To start provisioning a Kubernetes cluster, run the command:

rmk cluster capi provision

When the cluster is ready, RMK automatically switches the Kubernetes context to the newly created cluster.
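The context switch can be verified with `kubectl` once provisioning finishes; this assumes `kubectl` is installed and the cluster is reachable:

```shell
# Confirm the active context points at the newly created cluster.
kubectl config current-context

# List nodes to confirm the configured machine pools registered as expected.
kubectl get nodes -o wide
```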

To destroy a Kubernetes cluster, run the command:

rmk cluster capi destroy

After the cluster is destroyed, RMK will delete the previously created Cloud NAT (if this resource is no longer used by other clusters in the same region) together with the context for the target Kubernetes cluster.
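Whether the Cloud NAT was actually removed can be checked with `gcloud`; the router name and region below are placeholders:

```shell
# Placeholders: replace with your Cloud Router name and region.
# An empty result means no Cloud NAT remains on this router.
gcloud compute routers nats list \
  --router="my-router" \
  --region="us-central1"
```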


Last update: February 17, 2025