Managing your cloud ecosystems: Keeping your setup consistent

Planning and managing your cloud ecosystem and environments is critical for reducing production downtime and maintaining a functioning workload. In the “Managing your cloud ecosystems” blog series, we cover different strategies for ensuring that your setup functions smoothly with minimal downtime.

Previously, we covered keeping your workload running when updating worker nodes, managing major, minor and patch updates, and migrating workers to a new OS version. Now, we’ll put it all together by keeping components consistent across clusters and environments.

Example setup

We’ll be analyzing an example setup that includes the following four IBM Cloud Kubernetes Service VPC clusters:

  • One development cluster
  • One QA test cluster
  • Two production clusters (one in Dallas and one in London)

You can view a list of clusters in your account by running the ibmcloud ks cluster ls command:

Name           ID                 State    Created        Workers   Location   Version        Resource Group Name   Provider
vpc-dev        bs34jt0biqdvesc    normal   2 years ago    6         Dallas     1.25.10_1545   default               vpc-gen2
vpc-qa         c1rg7o0vnsob07     normal   2 years ago    6         Dallas     1.25.10_1545   default               vpc-gen2
vpc-prod-dal   cfqqjkfd0gi2lrku   normal   4 months ago   6         Dallas     1.25.10_1545   default               vpc-gen2
vpc-prod-lon   broe71f2c59ilho    normal   4 months ago   6         London     1.25.10_1545   default               vpc-gen2

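One way to spot version drift across clusters is to compare the Version column programmatically. The following is a hypothetical sketch: the JSON field name masterKubeVersion in the comment is an assumption about the CLI's JSON output, and the sample data is hard-coded from the table above rather than fetched live.

```shell
# Hypothetical sketch: check that every cluster reports the same master version.
# In practice you might generate this list with something like:
#   ibmcloud ks cluster ls --output json | jq -r '.[] | "\(.name) \(.masterKubeVersion)"'
# (field name assumed). Here we hard-code the sample output from the table above.
clusters='vpc-dev 1.25.10_1545
vpc-qa 1.25.10_1545
vpc-prod-dal 1.25.10_1545
vpc-prod-lon 1.25.10_1545'

# Collect the distinct versions in use across all clusters.
distinct_versions=$(printf '%s\n' "$clusters" | awk '{print $2}' | sort -u)
count=$(printf '%s\n' "$distinct_versions" | wc -l)

if [ "$count" -eq 1 ]; then
  echo "OK: all clusters run $distinct_versions"
else
  echo "MISMATCH: found versions: $distinct_versions"
fi
```

If the script reports a mismatch, the lagging clusters are the ones to update first, following the update guidance later in this post.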

Each cluster has six worker nodes. Below is a list of the worker nodes running on the dev cluster. You can list a cluster’s worker nodes by running ibmcloud ks workers --cluster <cluster_name>:

ID                                               Primary IP      Flavor     State    Status   Zone         Version
kube-bstb34vesccv0-vpciksussou-default-008708f   10.240.64.63    bx2.4x16   normal   ready    us-south-2   1.25.10_1548
kube-bstb34jt0bcv0-vpciksussou-default-00872b7   10.240.128.66   bx2.4x16   normal   ready    us-south-3   1.25.10_1548
kube-bstb34jesccv0-vpciksussou-default-008745a   10.240.0.129    bx2.4x16   normal   ready    us-south-1   1.25.10_1548
kube-bstb3dvesccv0-vpciksussou-ubuntu2-008712d   10.240.64.64    bx2.4x16   normal   ready    us-south-2   1.25.10_1548
kube-bstb34jt0ccv0-vpciksussou-ubuntu2-00873f7   10.240.0.128    bx2.4x16   normal   ready    us-south-3   1.25.10_1548
kube-bstbt0vesccv0-vpciksussou-ubuntu2-00875a7   10.240.128.67   bx2.4x16   normal   ready    us-south-1   1.25.10_1548

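The zone spread shown above can also be verified with a short script. This is a hypothetical sketch: the jq expression in the comment assumes a location field in the CLI's JSON output, and the zone list is copied from the sample worker output above rather than fetched live.

```shell
# Hypothetical sketch: count worker nodes per zone to confirm an even spread.
# In practice you might generate the list with something like:
#   ibmcloud ks workers --cluster vpc-dev --output json | jq -r '.[].location'
# (field name assumed). Here we reuse the zones from the sample output above.
zones='us-south-2
us-south-3
us-south-1
us-south-2
us-south-3
us-south-1'

# Tally workers in each zone, e.g. "us-south-1=2".
per_zone=$(printf '%s\n' "$zones" | sort | uniq -c | awk '{print $2"="$1}')
echo "$per_zone"
```

An even count per zone (here, two workers in each of three zones) is what you want for the high-availability setup described below.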

Keeping your setup consistent

The example cluster and worker node outputs include several component characteristics that should stay consistent across all clusters and environments.

For clusters

  • The Provider type indicates whether the cluster’s infrastructure is VPC or Classic. To keep your workload functioning optimally, ensure that your clusters use the same provider across all your environments. After a cluster is created, you cannot change its provider type. If one of your clusters has a different provider, create a new cluster to replace it and migrate the workload to the new cluster. Note that for VPC clusters, the specific VPC that each cluster exists in might differ across environments. In this scenario, make sure that the VPC clusters are configured the same way to maintain as much consistency as possible.
  • The cluster Version indicates the Kubernetes version that the cluster master runs on—such as 1.25.10_1545. It’s important that your clusters run on the same version. Master patch versions—such as _1545—are automatically applied to the cluster (unless you opt out of automatic updates). Major and minor releases—such as 1.25 or 1.26—must be applied manually. If your clusters run on different versions, follow the information in our previous blog installment to update them. For more information on cluster versions, see Update Types in the Kubernetes service documentation.
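Because patch updates are applied automatically while major and minor releases are manual, it helps to read the two parts of a version string separately. The following sketch simply splits the sample version from the table above; it makes no CLI calls.

```shell
# Hypothetical sketch: split a cluster version string into the Kubernetes
# release (applied manually) and the master patch level (applied automatically).
version='1.25.10_1545'

release=${version%%_*}    # everything before the underscore: the Kubernetes release
patch=${version##*_}      # everything after the underscore: the master patch level
major_minor=$(echo "$release" | cut -d. -f1,2)   # the part you upgrade manually

echo "release=$release patch=$patch major.minor=$major_minor"
```

When comparing clusters, a difference in the part after the underscore usually resolves itself through automatic patching, while a difference in the major.minor release requires a manual update.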

For worker nodes

Note: Before you make any updates or changes to your worker nodes, plan your updates to ensure that your workload continues running uninterrupted. Worker node updates can cause disruptions if they are not planned beforehand. For more information, review our previous blog post.

  • The worker Version is the most recent worker node patch update that has been applied to your worker nodes. Patch updates include important security and Kubernetes upstream changes and should be applied regularly. See our previous blog post on version updates for more information on upgrading your worker node version.
  • The worker node Flavor, or machine type, determines the machine’s specifications for CPU, memory and storage. If your worker nodes have different flavors, replace them with new worker nodes that run on the same flavor. For more information, see Updating flavor (machine types) in the Kubernetes service docs.
  • The Zone indicates the location where the worker node is deployed. For high availability and maximum resiliency, make sure you have worker nodes spread across three zones within the same region. In this VPC example, there are two worker nodes in each of the us-south-1, us-south-2 and us-south-3 zones. Your worker node zones should be configured the same way in each cluster. If you need to change the zone configuration of your worker nodes, you can create a new worker pool with new worker nodes. Then, delete the old worker pool. For more information, see Adding worker nodes in VPC clusters or Adding worker nodes in Classic clusters.
  • Additionally, the Operating System that your worker nodes run on should be consistent throughout your cluster. Note that the operating system is specified for the worker pool rather than the individual worker nodes, and it is not included in the previous outputs. To see the operating system, run ibmcloud ks worker-pools --cluster <cluster_name>. For more information on migrating to a new operating system, see our previous blog post.
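Flavor and operating-system consistency can be checked at the worker-pool level with a similar script. This is a hypothetical sketch: the jq field names in the comment and the operating-system values are assumptions for illustration; only the pool names (default, ubuntu2) come from the sample worker IDs above.

```shell
# Hypothetical sketch: confirm every worker pool in a cluster uses the same
# flavor and operating system. In practice you might generate the list with:
#   ibmcloud ks worker-pools --cluster vpc-dev --output json \
#     | jq -r '.[] | "\(.poolName) \(.flavor) \(.operatingSystem)"'
# (field names assumed). Pool names below come from the sample worker IDs;
# the OS values are illustrative.
pools='default bx2.4x16 UBUNTU_20_64
ubuntu2 bx2.4x16 UBUNTU_20_64'

# Count distinct (flavor, OS) combinations across the pools.
distinct=$(printf '%s\n' "$pools" | awk '{print $2, $3}' | sort -u | wc -l)

if [ "$distinct" -eq 1 ]; then
  echo "OK: all worker pools share the same flavor and OS"
else
  echo "MISMATCH across worker pools"
fi
```

A count greater than one means at least one worker pool diverges and should be replaced or migrated as described above.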

By keeping your cluster and worker node configurations consistent throughout your setup, you reduce workload disruptions and downtime. When making any changes to your setup, keep in mind the recommendations in our previous blog posts about updates and migrations across environments.

Wrap up

This concludes our blog series on managing your cloud ecosystems to reduce downtime. If you haven’t already, check out the other topics in the series:

Learn more about IBM Cloud Kubernetes Service clusters

Software Engineering Lead – IKS/ROKS/Satellite