1 - Architecture

This page is an overview of Open Cluster Management.

Overview

Open Cluster Management (OCM) is a powerful, modular, extensible platform for Kubernetes multi-cluster orchestration. Learning from the lessons of earlier attempts at building Kubernetes federation systems in the community, OCM moves away from the centralized, imperative architecture of Kubefed v2 and embraces a “hub-agent” architecture that mirrors the original “hub-kubelet” pattern of Kubernetes. Accordingly, the multi-cluster control plane in OCM is modeled as a “Hub”, and each cluster managed by the “Hub” runs a “Klusterlet”, a name deliberately reminiscent of “kubelet”. Here is a more detailed clarification of the two models used frequently throughout OCM:

  • Hub Cluster: The cluster that runs the multi-cluster control plane of OCM. Generally the hub cluster is expected to be a lightweight Kubernetes cluster hosting only a few fundamental controllers and services.

  • Klusterlet: The cluster being managed by the hub cluster, also referred to as a “managed cluster” or “spoke cluster”. The klusterlet agent actively pulls the latest prescriptions from the hub cluster and consistently reconciles the physical Kubernetes cluster to the expected state.

“Hub-spoke” architecture

Thanks to the “hub-spoke” architecture, most multi-cluster operations are decoupled into (1) computation/decision and (2) execution, and the actual execution against the target cluster is completely off-loaded to the managed cluster. The hub cluster does not send requests directly to the managed clusters; instead it persists its prescriptions declaratively for each cluster, and the klusterlet actively pulls the prescriptions from the hub and performs the execution. As a result, the burden on the hub cluster is greatly relieved, because the hub neither has to process a flood of events from the managed clusters nor gets buried in sending requests to them. Imagine a world where Kubernetes had no kubelet and its control plane operated the container daemons directly; it would be extremely hard for such a centralized controller to manage a cluster of 5k+ nodes. Likewise, OCM breaks through the scalability bottleneck by dividing and offloading the execution into separate agents, so it remains feasible for a single hub cluster to accept and manage thousands of clusters.

Each klusterlet works independently and autonomously, so it has only a weak dependency on the availability of the hub cluster. If the hub goes down (e.g. during maintenance or a network partition), the klusterlet and other OCM agents working in the managed cluster keep actively managing the hosting cluster until it re-connects. Additionally, if the hub cluster and the managed clusters are owned by different admins, it is easier for the admin of the managed cluster to police the prescriptions from the hub control plane, because the klusterlet runs as a “white-box” pod instance in the managed cluster. Upon any incident, the klusterlet admin can quickly cut off the connection to the hub cluster without shutting the whole multi-cluster control plane down.

Architecture diagram

The “hub-agent” architecture also minimizes the network requirements for registering a new cluster to the hub. Any cluster that can reach the endpoint of the hub cluster can be managed, even a random KinD sandbox cluster on your laptop, because the prescriptions are pulled from the hub rather than pushed to the clusters. In addition, OCM provides an addon named “cluster-proxy” which automatically manages a reverse proxy tunnel for proactive access to the managed clusters by leveraging the Kubernetes subproject konnectivity.

Modularity and extensibility

OCM not only gives you a fluent user experience for managing a number of clusters with ease, it is equally friendly to further customization and downstream development. Every piece of functionality in OCM is designed to be freely pluggable, with each atomic capability modularized into a separate building block, except for the mandatory core module named registration, which is responsible for controlling the lifecycle of a managed cluster and exporting the elementary ManagedCluster model.

Another good example of this modularity is placement, a standalone module focused on dynamically selecting the proper list of managed clusters according to the user’s prescription. You can build any advanced multi-cluster orchestration on top of placement, e.g. multi-cluster workload re-balancing, multi-cluster helm chart replication, etc. On the other hand, if you’re not satisfied with the current capabilities of our placement module, you can quickly opt out and replace it with your own customized one, and reach out to our community so that we can converge in the future if possible.


Concepts

Cluster registering: “double opt-in handshaking”

In practice the hub cluster and the managed cluster can be owned/maintained by different admins, so OCM clearly separates the roles and makes cluster registration require approval from both sides, defending against unwelcome requests. In terms of terminating the registration, the hub admin can kick out a registered cluster by denying the rotation of the certificate issued by the hub cluster, while the managed cluster’s admin can either forcibly delete the agent instances or revoke the RBAC permissions granted to the agents. Note that the hub controller automatically prepares the environment for a newly registered cluster and cleans up neatly when a managed cluster is kicked out.

Double opt-in handshaking

Cluster registration security model

Security model

The worker cluster admin can list and read any managed cluster’s CSR, but those CSRs cannot be used to impersonate a cluster because a CSR only contains the certificate. Client authentication requires both the key and the certificate. The key is stored in each managed cluster and is never transmitted across the network.

The worker cluster admin cannot approve his or her own cluster registration by default. Two separate RBAC permissions are needed to approve a cluster registration: the permission to approve the CSR, and the permission to accept the managed cluster. Only the cluster admin on the hub has both permissions and can accept the cluster registration request. The second (accept) permission is gated by a webhook.
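As a rough sketch, the two permissions could look like the ClusterRole below. The CSR-approval rules are the standard Kubernetes ones; the accept rule targets the subresource checked by the webhook, and the exact group/subresource names are assumptions rather than an authoritative reference:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata: ...
rules:
  # approve and sign the CSR (standard Kubernetes CSR approval permissions,
  # assuming the default kubernetes.io/kube-apiserver-client signer)
  - apiGroups:
      - certificates.k8s.io
    resources:
      - certificatesigningrequests/approval
    verbs:
      - update
  - apiGroups:
      - certificates.k8s.io
    resources:
      - signers
    resourceNames:
      - kubernetes.io/kube-apiserver-client
    verbs:
      - approve
  # accept the managed cluster (the permission gated by the webhook;
  # the subresource name here is an assumption)
  - apiGroups:
      - register.open-cluster-management.io
    resources:
      - managedclusters/accept
    verbs:
      - update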

Cluster namespace

Kubernetes provides native soft multi-tenancy isolation at the granularity of its namespace resources, so in OCM a dedicated namespace is provisioned for each managed cluster, and sufficient RBAC permissions are granted so that the klusterlet can persist some data in the hub cluster. This dedicated namespace is the “cluster namespace”, which is mainly used for saving the prescriptions from the hub; e.g. we can create a ManifestWork in a cluster namespace in order to deploy some resources to the corresponding cluster. Meanwhile, the cluster namespace can also be used to save the stats uploaded by the klusterlet, e.g. the healthiness of an addon, etc.
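For instance, assuming a managed cluster registered as cluster1 (and hence a cluster namespace of the same name), a prescription can be persisted in that namespace like the following minimal sketch (see the ManifestWork section for a complete example):

apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata:
  # the cluster namespace shares the name of the managed cluster
  namespace: cluster1
  name: example-work
spec: ...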

Addons

Addon is a general concept for the optional, pluggable customizations built on top of OCM’s extensibility. An addon can be a controller in the hub cluster, a customized agent in the managed cluster, or both working together as peers. Addons are expected to implement the ClusterManagementAddOn or ManagedClusterAddOn API, which is elaborated in detail here.


Building blocks

The following is a list of commonly-used modules/subprojects that you might find interesting on your journey with OCM:

Registration

The core module of OCM manages the lifecycle of the managed clusters. The registration controller in the hub cluster can be intuitively compared to a broker that represents and manages the hub cluster in terms of cluster registration, while the registration agent working in the managed cluster is another broker that represents the managed cluster. After a successful registration, the registration controller and agent also consistently probe each other’s healthiness, i.e. the cluster heartbeats.

Work

The module for dispatching resources from the hub cluster to the managed clusters, which can be easily done by writing a ManifestWork resource into a cluster namespace. See more details about the API here.

Placement

Builds custom advanced topology across the clusters by grouping clusters via labels or cluster-claims. The placement module is completely decoupled from the execution; the output from placement is merely a list of names of the matched clusters in the PlacementDecision API, so the consumer controller of the decision output can reactively discover topology or availability changes in the managed clusters by simply list-watching the decision API.

Application lifecycle

The application lifecycle defines the processes that are used to manage application resources on your managed clusters. A multi-cluster application uses a Kubernetes specification, but with additional automation of the deployment and lifecycle management of resources to individual clusters. A multi-cluster application allows you to deploy resources on multiple clusters, while maintaining easy-to-reconcile service routes, as well as full control of Kubernetes resource updates for all aspects of the application.

Governance and risk

Governance and risk is the term used to define the processes that are used to manage security and compliance from the hub cluster. Ensure the security of your cluster with the extensible policy framework. After you configure a hub cluster and a managed cluster, you can create, modify and delete policies on the hub and apply policies to the managed clusters.

Registration operator

Automating the installation and upgrading of a few built-in modules in OCM. You can either deploy the operator standalone or delegate the registration operator to the operator lifecycle framework.

2 - ClusterClaim

What is ClusterClaim?

ClusterClaim is a cluster-scoped API available to users on a managed cluster. The ClusterClaim objects are collected from the managed cluster and saved into the status of the corresponding ManagedCluster object on the hub.

Usage

ClusterClaim is used to specify additional properties of the managed cluster like the cluster ID, version, vendor and cloud provider. We define some reserved ClusterClaims, such as id.k8s.io, which is a unique identifier for the managed cluster.

In addition to the reserved ClusterClaims, users can also create up to 20 customized ClusterClaims by default. The maximum count of customized ClusterClaims can be configured via the flag max-custom-cluster-claims of the registration agent on the managed cluster.
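For example, to allow 30 customized ClusterClaims, --max-custom-cluster-claims=30 can be appended to the registration agent’s container arguments. The fragment below is only a sketch; the surrounding deployment spec is omitted:

  args:
    - ...
    - --max-custom-cluster-claims=30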

ClusterClaims with the label open-cluster-management.io/spoke-only will not be synced to the status of the ManagedCluster.
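For example, the following ClusterClaim (with a hypothetical name and value; the label value is arbitrary here) stays local to the managed cluster because of that label:

apiVersion: cluster.open-cluster-management.io/v1alpha1
kind: ClusterClaim
metadata:
  # hypothetical claim name
  name: scratch.example.com
  labels:
    # presence of this label keeps the claim from being synced to the hub
    open-cluster-management.io/spoke-only: "true"
spec:
  value: local-only-value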

Example

Here is a ClusterClaim example specifying id.k8s.io:

apiVersion: cluster.open-cluster-management.io/v1alpha1
kind: ClusterClaim
metadata:
  name: id.k8s.io
spec:
  value: myCluster

After applying the ClusterClaim above to any managed cluster, the value of the ClusterClaim is reflected in the ManagedCluster on the hub cluster:

apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata: ...
spec: ...
status:
  clusterClaims:
    - name: id.k8s.io
      value: myCluster

3 - ManagedCluster

What is ManagedCluster?

ManagedCluster is a cluster-scoped API in the hub cluster representing the registered or pending-for-acceptance Kubernetes clusters in OCM. The klusterlet agent working in the managed cluster is expected to actively maintain/refresh the status of the corresponding ManagedCluster resource on the hub cluster. On the other hand, removing the ManagedCluster from the hub cluster indicates the cluster is denied/exiled from the hub cluster. The following introduces how the cluster registration lifecycle works under the hood:

Cluster registration and acceptance

Bootstrapping registration

First, the cluster registration process is initiated by the registration agent, which requires a bootstrap kubeconfig, e.g.:

apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-hub-kubeconfig
  namespace: open-cluster-management-agent
type: Opaque
data:
  kubeconfig: <base64-encoded kubeconfig>

The minimal RBAC permissions required for the subject in the bootstrap kubeconfig are:

  • CertificateSigningRequest’s “get”, “list”, “watch”, “create”, “update”.
  • ManagedCluster’s “get”, “list”, “create”, “update”

Note that ideally the bootstrap kubeconfig should be short-lived (on the order of hours) after being signed by the hub cluster, so that it cannot be abused by unwelcome clients.

Last but not least, you can always make your life easier by leveraging OCM’s command-line tool clusteradm to manage the whole registration process.
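For example, a typical end-to-end flow looks like the sketch below (consult the clusteradm help output for the authoritative flags):

# on the hub cluster: generate a token for registration
$ clusteradm get token

# on the managed cluster: initiate the registration
$ clusteradm join --hub-token <token> --hub-apiserver <hub API server URL> --cluster-name <cluster name>

# on the hub cluster: complete the double opt-in
$ clusteradm accept --clusters <cluster name>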

Approving registration

When registering a new cluster into OCM, the registration agent starts by creating an unaccepted ManagedCluster in the hub cluster along with a temporary CertificateSigningRequest (CSR) resource. The cluster will be accepted by the hub control plane if the following requirements are met:

  • The CSR is approved and signed by a certificate provider, filling .status.certificate with a legitimate X.509 certificate.
  • The ManagedCluster resource is accepted by setting .spec.hubAcceptsClient to true.

Note that the cluster approval process above can be done in one line:

$ clusteradm accept --clusters <cluster name>

Upon approval, the registration agent observes the signed certificate and persists it as a local secret named “hub-kubeconfig-secret” (by default in the “open-cluster-management-agent” namespace), which will be mounted to the other fundamental components of the klusterlet such as the work agent. In a word, if you can find the “hub-kubeconfig-secret” present in your managed cluster, the cluster registration is all set!

Overall, the registration process in OCM is called the double opt-in mechanism, which means that a successful cluster registration requires approval and commitment from both the hub cluster and the managed cluster. This is especially useful when the hub cluster and managed clusters are operated by different admins or teams. In OCM, we assume the clusters are mutually untrusted in the beginning and then set up the connection between them gracefully, with permission and validity under control.

Note that the functionality mentioned above is all managed by OCM’s registration sub-project, which is the “root dependency” in the OCM world. It includes an agent in the managed cluster to register to the hub and a controller in the hub cluster to coordinate with the agent.

Cluster heartbeats and status

By default, the registration agent reports and refreshes its healthiness state to the hub cluster on a one-minute basis, and that interval can be easily overridden by setting .spec.leaseDurationSeconds on the ManagedCluster.
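For example, the following ManagedCluster (with an illustrative value) refreshes its lease every 60 seconds:

apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: cluster1
spec:
  hubAcceptsClient: true
  leaseDurationSeconds: 60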

In addition to that, some commonly-used information is also reflected in the status of the ManagedCluster, e.g.:

  status:
    version:
      kubernetes: v1.20.11
    allocatable:
      cpu: 11700m
      ephemeral-storage: "342068531454"
      hugepages-1Gi: "0"
      hugepages-2Mi: "0"
      memory: 17474228Ki
      pods: "192"
    capacity:
      cpu: "12"
      ephemeral-storage: 371168112Ki
      hugepages-1Gi: "0"
      hugepages-2Mi: "0"
      memory: 23777972Ki
      pods: "192"
    conditions: ...

Cluster taints and tolerations

To support filtering unhealthy/not-reporting clusters and keep workloads from being placed on unhealthy or unreachable clusters, we introduce a concept of taints and tolerations similar to that in Kubernetes. It also allows users to add a customized taint to deselect a cluster from placement. This is useful when the user wants to set a cluster to maintenance mode and evict workloads from this cluster.

In OCM, Taints and Tolerations work together to allow users to control the selection of managed clusters more flexibly.

Taints of ManagedClusters

Taints are properties of ManagedClusters; they allow a Placement to repel a set of ManagedClusters. A Taint includes the following fields:

  • Key (required). The taint key applied to a cluster. e.g. bar or foo.example.com/bar.
  • Value (optional). The taint value corresponding to the taint key.
  • Effect (required). The Effect of the taint on Placements that do not tolerate the taint. Valid effects are
    • NoSelect. It means Placements are not allowed to select a cluster unless they tolerate this taint. The cluster will be removed from the placement decision if it has already been selected by the Placement.
    • PreferNoSelect. It means the scheduler tries not to select the cluster, rather than prohibiting Placements from selecting the cluster entirely. (This is not implemented yet, currently clusters with effect PreferNoSelect will always be selected.)
    • NoSelectIfNew. It means Placements are not allowed to select the cluster unless: 1) they tolerate the taint; or 2) they already have the cluster in their cluster decisions.
  • TimeAdded (required). The time at which the taint was added. It is set automatically and the user should not set/update its value.

Builtin taints to reflect the status of ManagedClusters

There are two builtin taints, which will be automatically added to ManagedClusters, according to their conditions.

  • cluster.open-cluster-management.io/unavailable. The taint is added to a ManagedCluster when it is not available. To be specific, the cluster has a condition ‘ManagedClusterConditionAvailable’ with status of ‘False’. The taint has the effect NoSelect and an empty value. Example,
    apiVersion: cluster.open-cluster-management.io/v1
    kind: ManagedCluster
    metadata:
     name: cluster1
    spec:
     hubAcceptsClient: true
     taints:
       - effect: NoSelect
         key: cluster.open-cluster-management.io/unavailable
         timeAdded: '2022-02-21T08:11:54Z'
    
  • cluster.open-cluster-management.io/unreachable. The taint is added to a ManagedCluster when it is not reachable. To be specific,
      1. The cluster has no condition ‘ManagedClusterConditionAvailable’;
      2. Or the status of condition ‘ManagedClusterConditionAvailable’ is ‘Unknown’.
    The taint has the effect NoSelect and an empty value. Example,
    apiVersion: cluster.open-cluster-management.io/v1
    kind: ManagedCluster
    metadata:
      name: cluster1
    spec:
      hubAcceptsClient: true
      taints:
        - effect: NoSelect
          key: cluster.open-cluster-management.io/unreachable
          timeAdded: '2022-02-21T08:11:06Z'
    

Tolerations of Placements

Tolerations are applied to Placements, and allow Placements to select ManagedClusters with matching taints. Refer to Placement Taints/Tolerations to see how it is used for cluster selection.

Cluster removal

A previously registered cluster can opt out by cutting off the connection from either the hub cluster side or the managed cluster side. This is helpful for tackling emergency problems in your OCM environment, e.g.:

  • When the hub cluster is overloaded in an emergency
  • When the managed cluster is intended to detach from OCM
  • When the hub cluster is found sending wrong orders to the managed cluster
  • When the managed cluster is spamming requests to the hub cluster

Unregister from hub cluster

The recommended way to unregister a managed cluster is to flip the .spec.hubAcceptsClient bit back to false, which triggers the hub control plane to offload the managed cluster from effective management.
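For example, this can be done with a kubectl patch along the following lines (a minimal sketch):

$ kubectl patch managedcluster <cluster name> --type=merge -p '{"spec":{"hubAcceptsClient":false}}'

Meanwhile, a permanent way to kick a managed cluster out of the hub control plane is simply deleting its ManagedCluster resource: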

$ kubectl delete managedcluster <cluster name>

This also instantly revokes the previously-granted RBAC permissions for the managed cluster in the background. If we prefer to defer the rejection to the next time the klusterlet agent renews its certificate, as a minimal operation we can remove the following RBAC rule from the cluster’s effective cluster role resource:

# ClusterRole: open-cluster-management:managedcluster:<cluster name>
# Removing the following RBAC rule to stop the certificate rotation.
- apiGroups:
    - register.open-cluster-management.io
  resources:
    - managedclusters/clientcertificates
  verbs:
    - renew

Unregister from the managed cluster

The admin of the managed cluster can disable the prescriptions from the hub cluster by scaling the OCM klusterlet agents to 0, or just permanently delete the agent components from the managed cluster.
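For example, assuming the default agent namespace and deployment names (open-cluster-management-agent, klusterlet-registration-agent and klusterlet-work-agent; they may differ in your installation), scaling down could look like:

$ kubectl -n open-cluster-management-agent scale deployment klusterlet-registration-agent --replicas=0
$ kubectl -n open-cluster-management-agent scale deployment klusterlet-work-agent --replicas=0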

Managed Cluster’s certificate rotation

The certificates used by the agents from the managed cluster to talk to the hub control plane will be periodically rotated with an ephemeral and random identity. The following picture shows how the automated certificate rotation works.

Registration Process

What’s next?

Furthermore, we can do advanced cluster matching/selecting within a managedclusterset using the placement module.

4 - ManagedClusterSet

API-CHANGE NOTE:

The ManagedClusterSet and ManagedClusterSetBinding API v1beta1 version will no longer be served in OCM v0.12.0.

  • Migrate manifests and API clients to use the ManagedClusterSet and ManagedClusterSetBinding API v1beta2 version, available since OCM v0.9.0.
  • All existing persisted objects are accessible via the new API.
  • Notable changes:
    • The default cluster selector type will be ExclusiveClusterSetLabel in v1beta2, and type LegacyClusterSetLabel in v1beta1 is removed.

What is ManagedClusterSet?

ManagedClusterSet is a cluster-scoped API in the hub cluster for grouping a few managed clusters into a “set” so that the hub admin can operate these clusters together at a higher level. The concept is inspired by the enhancement from the Kubernetes SIG-Multicluster. Member clusters in the set are supposed to have common/similar attributes, e.g. purpose of use, deployed regions, etc.

ManagedClusterSetBinding is a namespace-scoped API in the hub cluster to project a ManagedClusterSet into a certain namespace. Each ManagedClusterSet can be managed/administrated by different hub admins, and their RBAC permissions can also be isolated by binding the ManagedClusterSet to a “workspace namespace” in the hub cluster via ManagedClusterSetBinding.

Note that ManagedClusterSet and “workspace namespace” have an M*N relationship:

  • Binding multiple cluster sets to one workspace namespace indicates that the admin of that namespace can operate the member clusters from all of those sets.
  • Binding one cluster set to multiple workspace namespaces indicates that the cluster set can be operated from all the bound namespaces at the same time.

The cluster set admin can flexibly operate the member clusters in the workspace namespace using Placement API, etc.

The following picture shows the hierarchies of how the cluster set works:

Clusterset

Operating a ManagedClusterSet using clusteradm

Creating a ManagedClusterSet

Run the following command to create an example cluster set:

$ clusteradm create clusterset example-clusterset
$ clusteradm get clustersets
<ManagedClusterSet>
└── <default>
│   ├── <BoundNamespace>
│   ├── <Status> No ManagedCluster selected
└── <example-clusterset>
│   ├── <BoundNamespace>
│   ├── <Status> No ManagedCluster selected
└── <global>
    └── <BoundNamespace>
    └── <Status> 1 ManagedClusters selected

The newly created cluster set will be empty by default, so we can move on to adding member clusters to the set.

Adding a ManagedCluster to a ManagedClusterSet

Run the following command to add a cluster to the set:

$ clusteradm clusterset set example-clusterset --clusters managed1
$ clusteradm get clustersets
<ManagedClusterSet>
└── <default>
│   ├── <BoundNamespace>
│   ├── <Status> No ManagedCluster selected
└── <example-clusterset>
│   ├── <BoundNamespace>
│   ├── <Status> 1 ManagedClusters selected
└── <global>
    └── <BoundNamespace>
    └── <Status> 1 ManagedClusters selected

Note that adding a cluster to a cluster set will require the admin to have “managedclustersets/join” access in the hub cluster.

Now the cluster set contains 1 valid cluster, and in order to operate that cluster set we are supposed to bind it to an existing namespace to make it a “workspace namespace”.

Binding the ManagedClusterSet to a workspace namespace

Run the following command to bind the cluster set to a namespace. Note that the namespace SHALL NOT be an existing “cluster namespace” (i.e. a namespace that has the same name as a registered managed cluster).

Note that binding a cluster set to a namespace means granting access from that namespace to its member clusters. The bind process requires “managedclustersets/bind” access in the hub cluster, which is clarified below.

$ clusteradm clusterset bind example-clusterset --namespace default
$ clusteradm get clustersets
<ManagedClusterSet>
└── <default>
│   ├── <BoundNamespace>
│   ├── <Status> No ManagedCluster selected
└── <example-clusterset>
│   ├── <Status> 1 ManagedClusters selected
│   ├── <BoundNamespace> default
└── <global>
    └── <BoundNamespace>
    └── <Status> 1 ManagedClusters selected

So far we have successfully created a new cluster set containing 1 cluster and bound it to a “workspace namespace”.

A glance at the “ManagedClusterSet” API

The ManagedClusterSet is a vanilla Kubernetes custom resource which can be checked by the command kubectl get managedclusterset <cluster set name> -o yaml:

apiVersion: cluster.open-cluster-management.io/v1beta2
kind: ManagedClusterSet
metadata:
  name: example-clusterset
spec:
  clusterSelector:
    selectorType: ExclusiveClusterSetLabel
status:
  conditions:
  - lastTransitionTime: "2022-02-21T09:24:38Z"
    message: 1 ManagedClusters selected
    reason: ClustersSelected
    status: "False"
    type: ClusterSetEmpty

A ManagedClusterSet can also select clusters with a label selector, e.g.:

apiVersion: cluster.open-cluster-management.io/v1beta2
kind: ManagedClusterSet
metadata:
  name: example-openshift-clusterset
spec:
  clusterSelector:
    labelSelector:
      matchLabels:
        vendor: OpenShift
    selectorType: LabelSelector
status:
  conditions:
  - lastTransitionTime: "2022-06-20T08:23:28Z"
    message: 1 ManagedClusters selected
    reason: ClustersSelected
    status: "False"
    type: ClusterSetEmpty

The ManagedClusterSetBinding can also be checked by the command kubectl get managedclustersetbinding <cluster set name> -n <workspace-namespace> -oyaml:

apiVersion: cluster.open-cluster-management.io/v1beta2
kind: ManagedClusterSetBinding
metadata:
  name: example-clusterset
  namespace: default
spec:
  clusterSet: example-clusterset
status:
  conditions:
  - lastTransitionTime: "2022-12-19T09:55:10Z"
    message: ""
    reason: ClusterSetBound
    status: "True"
    type: Bound

Clusterset RBAC permission control

Adding member cluster to a clusterset

Adding a new member cluster to a clusterset requires RBAC permission to update the managed cluster and “create” permission on the managedclustersets/join subresource. We can manually apply the following clusterrole to allow a hub user to manipulate that clusterset:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata: ...
rules:
  - apiGroups:
      - cluster.open-cluster-management.io
    resources:
      - managedclusters
    verbs:
      - update
  - apiGroups:
      - cluster.open-cluster-management.io
    resources:
      - managedclustersets/join
    verbs:
      - create

Binding a clusterset to a namespace

The “binding” process of a cluster set is policed by a validating webhook that checks whether the requester has sufficient RBAC access to the managedclustersets/bind subresource. We can also manually apply the following clusterrole to grant a hub user the permission to bind cluster sets:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata: ...
rules:
  - apiGroups:
      - cluster.open-cluster-management.io
    resources:
      - managedclustersets/bind
    verbs:
      - create

Default ManagedClusterSet

For easier management, we introduce a ManagedClusterSet called default. A default ManagedClusterSet will be automatically created initially. Any clusters not specifying a ManagedClusterSet will be added into the default. The user can move the cluster from the default clusterset to another clusterset using the command:

clusteradm clusterset set target-clusterset --clusters cluster-name

The default clusterset is an alpha feature that can be disabled by turning off the feature gate in the registration controller: "--feature-gates=DefaultClusterSet=false"

Global ManagedClusterSet

For easier management, we also introduce a ManagedClusterSet called global. A global ManagedClusterSet will be automatically created initially. The global ManagedClusterSet includes all ManagedClusters.

The global clusterset is an alpha feature that can be disabled by turning off the feature gate in the registration controller: "--feature-gates=DefaultClusterSet=false"

global ManagedClusterSet detail:

apiVersion: cluster.open-cluster-management.io/v1beta2
kind: ManagedClusterSet
metadata:
  name: global
spec:
  clusterSelector:
    labelSelector: {}
    selectorType: LabelSelector
status:
  conditions:
  - lastTransitionTime: "2022-06-20T08:23:28Z"
    message: 1 ManagedClusters selected
    reason: ClustersSelected
    status: "False"
    type: ClusterSetEmpty

5 - Placement

CHANGE NOTE:

  • The Placement and PlacementDecision API v1alpha1 version will no longer be served in OCM v0.9.0.

    • Migrate manifests and API clients to use the Placement and PlacementDecision API v1beta1 version, available since OCM v0.7.0.
    • All existing persisted objects are accessible via the new API.
    • Notable changes:
      • The field spec.prioritizerPolicy.configurations.name in Placement API v1alpha1 is removed and replaced by spec.prioritizerPolicy.configurations.scoreCoordinate.builtIn in v1beta1.
  • Clusters in terminating state will not be selected by placements from OCM v0.14.0.

Overall

The Placement concept is used to dynamically select a set of ManagedClusters in one or multiple ManagedClusterSets so that higher-level users can either replicate Kubernetes resources to the member clusters or run their advanced workloads, i.e. multi-cluster scheduling.

The “input” and “output” of the scheduling process are decoupled into two separate Kubernetes APIs, Placement and PlacementDecision. As shown in the following picture, we prescribe the scheduling policy in the spec of the Placement API, and the placement controller in the hub dynamically selects a slice of managed clusters from the given cluster sets. The selected clusters will be listed in PlacementDecision.

Placement

Following the architecture of Kubernetes’ original scheduling framework, the multi-cluster scheduling is logically divided into two phases internally:

  • Predicate: Hard requirements for the selected clusters.
  • Prioritize: Rank the clusters by the soft requirements and select a subset among them.

Select clusters in ManagedClusterSet

By following the previous section about ManagedClusterSet, we should now have one or multiple valid cluster sets in the hub cluster. Then we can move on and create a placement in the “workspace namespace” by specifying predicates and prioritizers in the Placement API to define our own multi-cluster scheduling policy.

Notes:

  • Clusters in terminating state will not be selected by placements.

Predicates

Label/Claim selection

In the predicates section, you can select clusters by labels or clusterClaims. For instance, you can select 3 clusters with the label purpose=test and the clusterClaim platform.open-cluster-management.io=aws as seen in the following example:

apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: placement1
  namespace: default
spec:
  numberOfClusters: 3
  clusterSets:
    - prod
  predicates:
    - requiredClusterSelector:
        labelSelector:
          matchLabels:
            purpose: test
        claimSelector:
          matchExpressions:
            - key: platform.open-cluster-management.io
              operator: In
              values:
                - aws

Note that the distinction between label-selecting and claim-selecting is elaborated in this page about how to extend attributes for the managed clusters.

Taints/Tolerations

To support filtering unhealthy/not-reporting clusters and keep workloads from being placed on unhealthy or unreachable clusters, we introduce a concept of taints and tolerations similar to that in Kubernetes. It also allows users to add a customized taint to deselect a cluster from placement. This is useful when the user wants to set a cluster to maintenance mode and evict workloads from this cluster.

In OCM, Taints and Tolerations work together to allow users to control the selection of managed clusters more flexibly.

Taints are properties of ManagedClusters; they allow a Placement to repel a set of ManagedClusters in the predicates stage.

Tolerations are applied to Placements, and allow Placements to select ManagedClusters with matching taints.

The following example shows how to tolerate clusters with taints.

  • Tolerate clusters with taint

    Suppose your managed cluster has taint added as below.

    apiVersion: cluster.open-cluster-management.io/v1
    kind: ManagedCluster
    metadata:
      name: cluster1
    spec:
      hubAcceptsClient: true
      taints:
        - effect: NoSelect
          key: gpu
          value: "true"
          timeAdded: '2022-02-21T08:11:06Z'
    

    By default, the placement won’t select this cluster unless you define tolerations.

    apiVersion: cluster.open-cluster-management.io/v1beta1
    kind: Placement
    metadata:
      name: placement1
      namespace: ns1
    spec:
      tolerations:
        - key: gpu
          value: "true"
          operator: Equal
    

    With the above tolerations defined, cluster1 could be selected by the placement because the key: gpu and value: "true" match.

  • Tolerate clusters with taint for a period of time

    TolerationSeconds represents the period of time the toleration tolerates the taint. It can be used for cases like the following: when a managed cluster goes offline, users can have applications deployed on this cluster transferred to another available managed cluster after a tolerated time.

    apiVersion: cluster.open-cluster-management.io/v1
    kind: ManagedCluster
    metadata:
      name: cluster1
    spec:
      hubAcceptsClient: true
      taints:
        - effect: NoSelect
          key: cluster.open-cluster-management.io/unreachable
          timeAdded: '2022-02-21T08:11:06Z'
    

    If you define a placement with TolerationSeconds as below, the workload will be transferred to another available managed cluster after 5 minutes.

    apiVersion: cluster.open-cluster-management.io/v1beta1
    kind: Placement
    metadata:
      name: placement1
      namespace: ns1
    spec:
      tolerations:
        - key: cluster.open-cluster-management.io/unreachable
          operator: Exists
          tolerationSeconds: 300
    

The tolerations section includes the following fields:

  • Key (optional). Key is the taint key that the toleration applies to.
  • Value (optional). Value is the taint value the toleration matches to.
  • Operator (optional). Operator represents a key’s relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. A toleration “matches” a taint if the keys are the same and the effects are the same, and the operator is:
    • Equal. The operator is Equal and the values are equal.
    • Exists. Exists is equivalent to wildcard for value, so that a placement can tolerate all taints of a particular category.
  • Effect (optional). Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSelect, PreferNoSelect and NoSelectIfNew. (PreferNoSelect is not implemented yet, currently clusters with effect PreferNoSelect will always be selected.)
  • TolerationSeconds (optional). TolerationSeconds represents the period of time the toleration (which must be of effect NoSelect/PreferNoSelect, otherwise this field is ignored) tolerates the taint. The default value is nil, which indicates it tolerates the taint forever. The start time of counting the TolerationSeconds should be the TimeAdded in Taint, not the cluster scheduled time or TolerationSeconds added time.

Prioritizers

Score-based prioritizer

In the prioritizerPolicy section, you can define the policy of prioritizers.

The following example shows how to select clusters with prioritizers.

  • Select a cluster with the largest allocatable memory.

    apiVersion: cluster.open-cluster-management.io/v1beta1
    kind: Placement
    metadata:
      name: placement1
      namespace: ns1
    spec:
      numberOfClusters: 1
      prioritizerPolicy:
        configurations:
          - scoreCoordinate:
              builtIn: ResourceAllocatableMemory
    

    The prioritizer policy has the default mode Additive and the default prioritizers Steady and Balance.

    In the above example, the prioritizers that actually come into effect are Steady, Balance and ResourceAllocatableMemory.

    The end of this section has more details about the prioritizer policy mode and the default prioritizers.

  • Select a cluster with the largest allocatable CPU and memory, and make placement sensitive to resource changes.

    apiVersion: cluster.open-cluster-management.io/v1beta1
    kind: Placement
    metadata:
      name: placement1
      namespace: ns1
    spec:
      numberOfClusters: 1
      prioritizerPolicy:
        configurations:
          - scoreCoordinate:
              builtIn: ResourceAllocatableCPU
            weight: 2
          - scoreCoordinate:
              builtIn: ResourceAllocatableMemory
            weight: 2
    

    The prioritizer policy has the default mode Additive and the default prioritizers Steady and Balance, and their default weight is 1.

    In the above example, the prioritizers that actually come into effect are Steady with weight 1, Balance with weight 1, ResourceAllocatableCPU with weight 2 and ResourceAllocatableMemory with weight 2. The cluster score will be a combination of the 4 prioritizers’ scores. Since ResourceAllocatableCPU and ResourceAllocatableMemory have a higher weight, they are weighted more in the results, making the placement sensitive to resource changes.

    The end of this section has more details about the prioritizer weight and how the final score is calculated.

  • Select two clusters with the largest addon score CPU ratio, and pin the placement decisions.

    apiVersion: cluster.open-cluster-management.io/v1beta1
    kind: Placement
    metadata:
      name: placement1
      namespace: ns1
    spec:
      numberOfClusters: 2
      prioritizerPolicy:
        mode: Exact
        configurations:
          - scoreCoordinate:
              builtIn: Steady
            weight: 3
          - scoreCoordinate:
              type: AddOn
              addOn:
                resourceName: default
                scoreName: cpuratio
    

    In the above example, the mode is explicitly defined as Exact. The prioritizers that actually come into effect are Steady with weight 3 and the addon score cpuratio with weight 1. Go to the Extensible scheduling section to learn more about addon scores.

The prioritizerPolicy section includes the following fields:

  • mode is either Exact, Additive or "", where "" is Additive by default.
    • In Additive mode, any prioritizer not explicitly enumerated is enabled in its default Configurations, in which Steady and Balance prioritizers have the weight of 1 while other prioritizers have the weight of 0. Additive doesn’t require configuring all prioritizers. The default Configurations may change in the future, and additional prioritization will happen.
    • In Exact mode, any prioritizer not explicitly enumerated is weighted as zero. Exact requires knowing the full set of prioritizers you want, but avoids behavior changes between releases.
  • configurations represents the configuration of prioritizers.
    • scoreCoordinate represents the configuration of the prioritizer and score source.
      • type defines the type of the prioritizer score. Type is either BuiltIn, AddOn or “”, where "" is BuiltIn by default. When the type is BuiltIn, a BuiltIn prioritizer name must be specified. When the type is AddOn, the score source needs to be configured in addOn.
        • builtIn defines the name of a BuiltIn prioritizer. Below are the valid BuiltIn prioritizer names.
          • Balance: balance the decisions among the clusters.
          • Steady: ensure the existing decision is stabilized.
          • ResourceAllocatableCPU: sort clusters based on the allocatable CPU.
          • ResourceAllocatableMemory: sort clusters based on the allocatable memory.
        • addOn defines the resource name and score name. AddOnPlacementScore is introduced to describe addon scores, go into the Extensible scheduling section to learn more about it.
          • resourceName defines the resource name of the AddOnPlacementScore. The placement prioritizer selects AddOnPlacementScore CR by this name.
          • scoreName defines the score name inside AddOnPlacementScore. AddOnPlacementScore contains a list of score name and score value, scoreName specifies the score to be used by the prioritizer.
    • weight defines the weight of the prioritizer. The value must be in the range [-10,10]. Each prioritizer calculates an integer score for a cluster in the range of [-100, 100]. The final score of a cluster will be sum(weight * prioritizer_score). A higher weight indicates that the prioritizer counts more in the cluster selection, while a weight of 0 indicates that the prioritizer is disabled. A negative weight indicates wanting to select the last ones, i.e. clusters with lower scores for that prioritizer are preferred.
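
For example, with the default Steady and Balance prioritizers (weight 1) each scoring a cluster 100, and ResourceAllocatableMemory configured with weight 2 scoring that cluster 80, the final score of the cluster would be 1*100 + 1*100 + 2*80 = 360.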

Extensible scheduling

In placement resource based scheduling, in some cases the prioritizer needs extra data (beyond the default values provided by the ManagedCluster) to calculate the score of a managed cluster. For example, scheduling the clusters based on cpu or memory usage data of the clusters fetched from a monitoring system.

So we provide a new API AddOnPlacementScore to support a more extensible way to schedule based on customized scores.

  • As a user, as mentioned in the above section, you can specify the score in the placement yaml to select clusters.
  • As a score provider, a 3rd party controller could run on either the hub or the managed cluster to maintain the lifecycle of AddOnPlacementScore and update the score in it.

Extend the multi-cluster scheduling capabilities with placement introduces how to implement a customized score provider.

Refer to the enhancements to learn more.

PlacementDecisions

A slice of PlacementDecisions will be created by the placement controller in the same namespace, each with the label cluster.open-cluster-management.io/placement={placement name}. A PlacementDecision contains the results of the cluster selection, as seen in the following example.

apiVersion: cluster.open-cluster-management.io/v1beta1
kind: PlacementDecision
metadata:
  labels:
    cluster.open-cluster-management.io/placement: placement1
  name: placement1-decision-1
  namespace: default
status:
  decisions:
    - clusterName: cluster1
    - clusterName: cluster2
    - clusterName: cluster3

The status.decisions lists the top N clusters with the highest scores, ordered by name. The status.decisions changes over time as the scheduling result is updated.

The scheduling result in the PlacementDecision API is designed to be paginated with its page index as the name’s suffix to avoid “too large object” issue from the underlying Kubernetes API framework.

PlacementDecision can be consumed by another operand to decide how the workload should be placed in multiple clusters.
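For example, a consumer can list (or list-watch) all the decisions belonging to a placement by the label mentioned above:

$ kubectl -n default get placementdecisions -l cluster.open-cluster-management.io/placement=placement1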

Decision strategy

The decisionStrategy section of Placement can be used to divide the created PlacementDecisions into groups and define the number of clusters per decision group.

Assume an environment has 310 clusters, 10 of which have the label prod-canary-west and 10 have the label prod-canary-east. The following example demonstrates how to group the clusters with the labels prod-canary-west and prod-canary-east into 2 groups, and group the remaining clusters into groups with a maximum of 150 clusters each.

apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: placement1
  namespace: default
spec:
  clusterSets:
    - global
  decisionStrategy:
    groupStrategy:
      clustersPerDecisionGroup: 150
      decisionGroups:
      - groupName: prod-canary-west
        groupClusterSelector:
          labelSelector:
            matchExpressions:
              - key: prod-canary-west
                operator: Exists
      - groupName: prod-canary-east
        groupClusterSelector:
          labelSelector:
            matchExpressions:
              - key: prod-canary-east
                operator: Exists

The decisionStrategy section includes the following fields:

  • decisionGroups: Represents a list of predefined groups to put the decision results in. Decision groups will be constructed based on the decisionGroups field first. The clusters not included in the decisionGroups will be divided into other decision groups afterwards. Each decision group should not contain more clusters than clustersPerDecisionGroup.
    • groupName: Represents the name to be added as the value of label key cluster.open-cluster-management.io/decision-group-name of created PlacementDecisions.
    • groupClusterSelector: Defines the label selector to select clusters subset by label.
  • clustersPerDecisionGroup: A specific number or percentage of the total selected clusters. This value divides the PlacementDecisions into decision groups, with the maximum number of clusters in each group equal to that value.

With this decision strategy defined, the placement status will list the group result, including the decision group name and index, the cluster count, and the corresponding PlacementDecision names.

status:
...
  decisionGroups:
  - clusterCount: 10
    decisionGroupIndex: 0
    decisionGroupName: prod-canary-west
    decisions:
    - placement1-decision-1
  - clusterCount: 10
    decisionGroupIndex: 1
    decisionGroupName: prod-canary-east
    decisions:
    - placement1-decision-2
  - clusterCount: 150
    decisionGroupIndex: 2
    decisionGroupName: ""
    decisions:
    - placement1-decision-3
    - placement1-decision-4
  - clusterCount: 140
    decisionGroupIndex: 3
    decisionGroupName: ""
    decisions:
    - placement1-decision-5
    - placement1-decision-6
  numberOfSelectedClusters: 310

The PlacementDecision will have labels cluster.open-cluster-management.io/decision-group-name and cluster.open-cluster-management.io/decision-group-index to indicate which group name and group index it belongs to.

apiVersion: cluster.open-cluster-management.io/v1beta1
kind: PlacementDecision
metadata:
  labels:
    cluster.open-cluster-management.io/placement: placement1
    cluster.open-cluster-management.io/decision-group-index: "0"
    cluster.open-cluster-management.io/decision-group-name: prod-canary-west
  name: placement1-decision-1
  namespace: default
...

Rollout Strategy

The Rollout Strategy API facilitates the use of the placement decision strategy with OCM workload applier APIs, such as Policy, Addon and ManifestWorkReplicaSet, to apply workloads.

    placements:
    - name: placement-example
      rolloutStrategy:
        type: Progressive
        progressive:
          mandatoryDecisionGroups:
          - groupName: "prod-canary-west"
          - groupName: "prod-canary-east"
          maxConcurrency: 25%
          minSuccessTime: 5m
          progressDeadline: 10m
          maxFailures: 2

The Rollout Strategy API provides three rollout types:

  1. All: means apply the workload to all clusters in the decision groups at once.
  2. Progressive: means apply the workload to the selected clusters progressively per cluster. The workload will not be applied to the next cluster unless one of the currently applied clusters reaches the successful state and MaxFailures has not been breached.
  3. ProgressivePerGroup: means apply the workload to the decisionGroup clusters progressively per group. The workload will not be applied to the next decisionGroup unless all clusters in the current decision group reach the successful state and MaxFailures has not been breached.

The Rollout Strategy API also provides a rollout config to fine-tune the workload apply progress based on the use-case requirements:

  1. MinSuccessTime: defined in seconds/minutes/hours for how long the workload applier controller will wait from the beginning of the rollout before proceeding with the next rollout, assuming a successful state has been reached and MaxFailures has not been breached. Default is 0, meaning the workload applier proceeds immediately after a successful state is reached.
  2. ProgressDeadline: defined in seconds/minutes/hours for how long the workload applier controller will wait for the workload to reach a successful state in the spoke cluster. If the workload does not reach a successful state within ProgressDeadline, the controller stops waiting and the workload is treated as “timeout” and counted into MaxFailures. Once MaxFailures is breached, the rollout stops. The default value is “None”, meaning the workload applier will wait for a successful state indefinitely.
  3. MaxFailures: defined as the maximum percentage or number of clusters that can fail while still proceeding with the rollout. Fail means the cluster has a failed status or timeout status (does not reach a successful status within ProgressDeadline). Once MaxFailures is breached, the rollout stops. Default is 0, meaning that no failures are tolerated.
  4. MaxConcurrency: the maximum number of clusters to deploy the workload to concurrently. MaxConcurrency can be defined only when the rollout type is Progressive.
  5. MandatoryDecisionGroups: a list of decision groups to apply the workload to first. If mandatoryDecisionGroups is not defined, the decision group index is used to apply the workload to the groups in order. MandatoryDecisionGroups can be defined only when the rollout type is Progressive or ProgressivePerGroup.

Troubleshooting

If no PlacementDecision is generated after you create a Placement, you can run the commands below to troubleshoot.

Check the Placement conditions

For example:

$ kubectl describe placement <placement-name>
Name:         demo-placement
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  cluster.open-cluster-management.io/v1beta1
Kind:         Placement
...
Status:
  Conditions:
    Last Transition Time:       2022-09-30T07:39:45Z
    Message:                    Placement configurations check pass
    Reason:                     Succeedconfigured
    Status:                     False
    Type:                       PlacementMisconfigured
    Last Transition Time:       2022-09-30T07:39:45Z
    Message:                    No valid ManagedClusterSetBindings found in placement namespace
    Reason:                     NoManagedClusterSetBindings
    Status:                     False
    Type:                       PlacementSatisfied
  Number Of Selected Clusters:  0
...

The Placement has two types of conditions, PlacementMisconfigured and PlacementSatisfied.

  • If the condition PlacementMisconfigured is true, it means your placement has configuration errors; the message tells you more details about the failure.
  • If the condition PlacementSatisfied is false, it means no ManagedCluster satisfies this placement; the message tells you more details about the failure. In this example, it is because no ManagedClusterSetBindings are found in the placement namespace.

Check the Placement events

For example:

$ kubectl describe placement <placement-name>
Name:         demo-placement
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  cluster.open-cluster-management.io/v1beta1
Kind:         Placement
...
Events:
  Type    Reason          Age   From                 Message
  ----    ------          ----  ----                 -------
  Normal  DecisionCreate  2m10s   placementController  Decision demo-placement-decision-1 is created with placement demo-placement in namespace default
  Normal  DecisionUpdate  2m10s   placementController  Decision demo-placement-decision-1 is updated with placement demo-placement in namespace default
  Normal  ScoreUpdate     2m10s   placementController  cluster1:0 cluster2:100 cluster3:200
  Normal  DecisionUpdate  3s      placementController  Decision demo-placement-decision-1 is updated with placement demo-placement in namespace default
  Normal  ScoreUpdate     3s      placementController  cluster1:200 cluster2:145 cluster3:189 cluster4:200

The placement controller will give a score to each filtered ManagedCluster and generate an event for it. When a cluster score changes, a new event is generated. You can check the score of each cluster in the Placement events to understand why some clusters with lower scores are not selected.

Debug

If you want to know more details about how clusters are selected in each step, you can follow the steps below to access the debug endpoint.

Create a clusterrole “debugger” to access the debug path and bind it to the anonymous user.

kubectl create clusterrole "debugger" --verb=get --non-resource-url="/debug/*"
kubectl create clusterrolebinding debugger --clusterrole=debugger --user=system:anonymous

Forward the placement controller’s port 8443 to your local machine.

kubectl port-forward -n open-cluster-management-hub deploy/cluster-manager-placement-controller 8443:8443

Curl the URL below to debug one specific placement.

curl -k  https://127.0.0.1:8443/debug/placements/<namespace>/<name>

For example, if the environment has a Placement named placement1 in the default namespace which selects 2 ManagedClusters, the output would look like:

$ curl -k  https://127.0.0.1:8443/debug/placements/default/placement1
{"filteredPiplieResults":[{"name":"Predicate","filteredClusters":["cluster1","cluster2"]},{"name":"Predicate,TaintToleration","filteredClusters":["cluster1","cluster2"]}],"prioritizeResults":[{"name":"Balance","weight":1,"scores":{"cluster1":100,"cluster2":100}},{"name":"Steady","weight":1,"scores":{"cluster1":100,"cluster2":100}}]}

Future work

In addition to selecting clusters by predicates, we are still working on other advanced scheduling features.

6 - ManifestWork

What is ManifestWork

ManifestWork is used to define a group of Kubernetes resources on the hub to be applied to the managed cluster. In the open-cluster-management project, a ManifestWork resource must be created in the cluster namespace. A work agent, implemented in the work project, runs on the managed cluster and monitors the ManifestWork resources in the cluster namespace on the hub cluster.

The following example shows a ManifestWork that deploys a Deployment to the managed cluster.

apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata:
  namespace: <target managed cluster>
  name: hello-work-demo
spec:
  workload:
    manifests:
      - apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: hello
          namespace: default
        spec:
          selector:
            matchLabels:
              app: hello
          template:
            metadata:
              labels:
                app: hello
            spec:
              containers:
                - name: hello
                  image: quay.io/asmacdo/busybox
                  command:
                    ["sh", "-c", 'echo "Hello, Kubernetes!" && sleep 3600']

Status tracking

The work agent tracks all the resources defined in a ManifestWork and updates its status. There are two types of status in a ManifestWork: the resourceStatus tracks the status of each manifest in the ManifestWork, and conditions reflects the overall status of the ManifestWork. The work agent currently checks whether a resource is Available, meaning the resource exists on the managed cluster, and Applied, meaning the resource defined in the ManifestWork has been applied to the managed cluster.

Here is an example.

apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata: ...
spec: ...
status:
  conditions:
    - lastTransitionTime: "2021-06-15T02:26:02Z"
      message: Apply manifest work complete
      reason: AppliedManifestWorkComplete
      status: "True"
      type: Applied
    - lastTransitionTime: "2021-06-15T02:26:02Z"
      message: All resources are available
      reason: ResourcesAvailable
      status: "True"
      type: Available
  resourceStatus:
    manifests:
      - conditions:
          - lastTransitionTime: "2021-06-15T02:26:02Z"
            message: Apply manifest complete
            reason: AppliedManifestComplete
            status: "True"
            type: Applied
          - lastTransitionTime: "2021-06-15T02:26:02Z"
            message: Resource is available
            reason: ResourceAvailable
            status: "True"
            type: Available
        resourceMeta:
          group: apps
          kind: Deployment
          name: hello
          namespace: default
          ordinal: 0
          resource: deployments
          version: v1

Fine-grained field values tracking

Optionally, we can let the work agent aggregate and report certain fields from the distributed resources to the hub cluster by setting a FeedbackRule for the ManifestWork:

apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata: ...
spec:
  workload: ...
  manifestConfigs:
    - resourceIdentifier:
        group: apps
        resource: deployments
        namespace: default
        name: hello
      feedbackRules:
        - type: WellKnownStatus
        - type: JSONPaths
          jsonPaths:
            - name: isAvailable
              path: '.status.conditions[?(@.type=="Available")].status'

The feedback rules instruct the work agent to periodically get the latest state of the resources and scrape only the expected fields from them, which helps trim the payload size of the status. Note that the collected feedback values on the ManifestWork will not be updated unless the latest value differs from the previously recorded value. Currently, two kinds of FeedbackRule are supported:

  • WellKnownStatus: Using the pre-built template of feedback values for those well-known kubernetes resources.
  • JSONPaths: A valid Kubernetes JSONPath that selects a scalar field from the resource. Currently supported types are Integer, String, Boolean and JsonRaw. JsonRaw is returned only when the RawFeedbackJsonString feature gate is enabled on the agent; in that case the agent returns the whole structure as a JSON string.

The default feedback value scraping interval is 30 seconds, and we can override it by setting --status-sync-interval on the work agent. A period that is too short can put an excessive burden on the control plane of the managed cluster, so a recommended lower bound for the interval is generally 5 seconds.
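As a sketch, assuming the work agent runs as the klusterlet-work-agent deployment in the open-cluster-management-agent namespace on the managed cluster (the default klusterlet layout), the flag can be added to the container args:

# Assumption: the work agent deployment is named klusterlet-work-agent
kubectl -n open-cluster-management-agent edit deployment klusterlet-work-agent
# then add the flag to the work agent container args, for example:
#   - --status-sync-interval=60s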

In the end, the scraped values from feedback rules will be shown in the status:

apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata: ...
spec: ...
status:
  resourceStatus:
    manifests:
    - conditions: ...
      resourceMeta: ...
      statusFeedback:
        values:
        - fieldValue:
            integer: 1
            type: Integer
          name: ReadyReplicas
        - fieldValue:
            integer: 1
            type: Integer
          name: Replicas
        - fieldValue:
            integer: 1
            type: Integer
          name: AvailableReplicas
        - fieldValue:
            string: "True"
            type: String
          name: isAvailable

Garbage collection

To ensure the resources applied by a ManifestWork are reliably recorded, the work agent creates an AppliedManifestWork on the managed cluster for each ManifestWork as an anchor for the resources relating to that ManifestWork. When a ManifestWork is deleted, the work agent runs a Foreground deletion: the ManifestWork stays in a deleting state until all of its related resources have been fully cleaned up in the managed cluster.
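For example, these anchors can be inspected on the managed cluster (AppliedManifestWork is a cluster-scoped resource):

kubectl get appliedmanifestworks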

Delete options

Users can explicitly choose not to garbage collect the applied resources when a ManifestWork is deleted by specifying the deleteOption in the ManifestWork. By default, deleteOption is set to Foreground, which means the applied resources on the spoke are deleted along with the ManifestWork. Set it to Orphan so the applied resources are not deleted. Here is an example:

apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata: ...
spec:
  workload: ...
  deleteOption:
    propagationPolicy: Orphan

Alternatively, users can specify that only certain resources defined in the ManifestWork be orphaned by setting the deleteOption to SelectivelyOrphan. The example below uses SelectivelyOrphan to ensure the deployment resource specified in the ManifestWork is removed while the service resource is kept.

apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata:
  name: selective-delete-work
spec:
  workload: ...
  deleteOption:
    propagationPolicy: SelectivelyOrphan
    selectivelyOrphans:
      orphaningRules:
      - group: ""
        resource: services
        namespace: default
        name: helloworld

Resource Race and Adoption

It is possible to create two ManifestWorks for the same cluster that define the same resource. For example, a user can create two ManifestWorks on cluster1 that both contain the deployment resource hello in the default namespace. If the content of the resource differs, the two ManifestWorks will fight over it; this is by design, since each ManifestWork is treated as equal and each declares ownership of the resource. If another controller on the managed cluster tries to manipulate a resource applied by a ManifestWork, that controller will also fight with the work agent.

When one of the ManifestWorks is deleted, the applied resource will not be removed, regardless of how deleteOption is set. The remaining ManifestWork still keeps ownership of the resource.

To resolve such conflicts, users can choose a different update strategy:

  • CreateOnly: with this strategy, the work-agent only ensures the creation of a manifest if the resource does not exist. The work-agent will not update the resource, so ownership of the whole resource can be taken over by another ManifestWork or controller.
  • ServerSideApply: with this strategy, the work-agent runs server-side apply for the manifest. The default field manager is work-agent, and it can be customized. If another ManifestWork or controller takes ownership of a certain field in the manifest, the original ManifestWork reports a conflict. Users can prune the original ManifestWork so that it only contains the fields it should own.
  • ReadOnly: with this strategy, the work-agent does not apply manifests onto the cluster, but it can still read resource fields and return results when feedback rules are defined. Only the metadata of the manifest is required to be defined in the spec of the ManifestWork with this strategy (see the sketch after this list).
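For illustration, a minimal ReadOnly sketch (the ManifestWork name hello-work-readonly is hypothetical): only the metadata of the deployment is listed, and a feedback rule reports its status back to the hub.

apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata:
  namespace: <target managed cluster>
  name: hello-work-readonly
spec:
  workload:
    manifests:
      - apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: hello
          namespace: default
  manifestConfigs:
    - resourceIdentifier:
        group: apps
        resource: deployments
        namespace: default
        name: hello
      updateStrategy:
        type: ReadOnly
      feedbackRules:
        - type: WellKnownStatus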

An example of using the ServerSideApply strategy follows:

  1. User creates a ManifestWork with ServerSideApply specified:
apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata:
  namespace: <target managed cluster>
  name: hello-work-demo
spec:
  workload: ...
  manifestConfigs:
    - resourceIdentifier:
        group: apps
        resource: deployments
        namespace: default
        name: hello
      updateStrategy:
        type: ServerSideApply
  2. The user creates another ManifestWork with ServerSideApply but a different field manager.
apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata:
  namespace: <target managed cluster>
  name: hello-work-replica-patch
spec:
  workload:
    manifests:
      - apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: hello
          namespace: default
        spec:
          replicas: 3
  manifestConfigs:
    - resourceIdentifier:
        group: apps
        resource: deployments
        namespace: default
        name: hello
      updateStrategy:
        type: ServerSideApply
        serverSideApply:
          force: true
          fieldManager: work-agent-another

The second ManifestWork only defines replicas in the manifest, so it takes ownership of replicas. If the first ManifestWork is updated to add the replicas field with a different value, it will get a conflict condition and will not update the manifest.

Instead of creating the second ManifestWork, the user can also set an HPA (HorizontalPodAutoscaler) for this deployment. The HPA will also take ownership of replicas, and an update to the replicas field in the first ManifestWork will return a conflict condition.
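For illustration, a minimal HorizontalPodAutoscaler sketch applied directly on the managed cluster; it takes ownership of spec.replicas for the hello deployment (the scaling bounds and CPU target are arbitrary):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hello
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hello
  minReplicas: 2
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80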

Permission setting for work agent

All workload manifests are applied to the managed cluster by the work agent, and by default the work agent has the following permissions on the managed cluster:

  • clusterRole admin (instead of cluster-admin) to apply common kubernetes resources
  • managing customresourcedefinitions, but not specific custom resource instances
  • managing clusterrolebindings, rolebindings, clusterroles and roles, including the bind and escalate permissions; this is why we can grant the work-agent service account extra permissions using ManifestWork

So if the workload manifests to be applied on the managed cluster exceed the above permissions, for example some Custom Resource instances, an error like ... is forbidden: User "system:serviceaccount:open-cluster-management-agent:klusterlet-work-sa" cannot get resource ... will be reflected in the ManifestWork status.

To prevent this, the service account klusterlet-work-sa used by the work-agent needs to be given the corresponding permissions. There are several ways:

  • add permissions on the managed cluster directly, by either
    • aggregating a new clusterRole for your to-be-applied resources to the existing admin clusterRole
    • OR creating a role/clusterRole roleBinding/clusterRoleBinding for the klusterlet-work-sa service account
  • add permissions from the hub cluster by another ManifestWork, which includes
    • a clusterRole with the label "open-cluster-management.io/aggregate-to-work": "true" for your to-be-applied resources; the rules defined in the clusterRole will be aggregated to the work agent (OCM version >= v0.12.0)
    • OR a role/clusterRole roleBinding/clusterRoleBinding for the klusterlet-work-sa service account

Below is an example of using ManifestWork to give klusterlet-work-sa permission for the resource machines.cluster.x-k8s.io.

  • Option 1: Use aggregated clusterRole
apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata:
  namespace: cluster1
  name: permission-set
spec:
  workload:
    manifests:
      - apiVersion: rbac.authorization.k8s.io/v1
        kind: ClusterRole
        metadata:
          name: open-cluster-management:klusterlet-work:my-role
          labels:
            open-cluster-management.io/aggregate-to-work: "true"  # with this label, the clusterRole will be selected to aggregate
        rules:
          # Allow the agent to manage machines
          - apiGroups: ["cluster.x-k8s.io"]
            resources: ["machines"]
            verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  • Option 2: Use clusterRole and clusterRoleBinding
apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata:
  namespace: cluster1
  name: permission-set
spec:
  workload:
    manifests:
      - apiVersion: rbac.authorization.k8s.io/v1
        kind: ClusterRole
        metadata:
          name: open-cluster-management:klusterlet-work:my-role
        rules:
          # Allow the agent to manage machines
          - apiGroups: ["cluster.x-k8s.io"]
            resources: ["machines"]
            verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
      - apiVersion: rbac.authorization.k8s.io/v1
        kind: ClusterRoleBinding
        metadata:
          name: open-cluster-management:klusterlet-work:my-binding
        roleRef:
          apiGroup: rbac.authorization.k8s.io
          kind: ClusterRole
          name: open-cluster-management:klusterlet-work:my-role
        subjects:
          - kind: ServiceAccount
            name: klusterlet-work-sa
            namespace: open-cluster-management-agent

Treating defaulting/immutable fields in API

The kube-apiserver sets defaulted or immutable fields for some APIs if the user does not set them, and deploying these APIs using ManifestWork may fail: in the reconcile loop, the work agent tries to update the immutable or defaulted field after comparing the desired manifest in the ManifestWork with the existing resource in the cluster, and the update either fails or does not take effect.

Let’s use Job as an example. The kube-apiserver will set a default selector and label on the Pod of the Job if the user does not set spec.selector in the Job. These fields are immutable, so the ManifestWork will report AppliedManifestFailed when we apply a Job without spec.selector using ManifestWork.

apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata:
  namespace: cluster1
  name: example-job
spec:
  workload:
    manifests:
      - apiVersion: batch/v1
        kind: Job
        metadata:
          name: pi
          namespace: default
        spec:
          template:
            spec:
              containers:
              - name: pi
                image: perl:5.34.0
                command: ["perl",  "-Mbignum=bpi", "-wle", "print bpi(2000)"]
              restartPolicy: Never
          backoffLimit: 4

There are 2 options to fix this issue.

  1. Specify the fields manually if they are configurable. For example, set spec.manualSelector=true and your own labels in the spec.selector of the Job, and set the same labels for the containers.
apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata:
  namespace: cluster1
  name: example-job-1
spec:
  workload:
    manifests:
      - apiVersion: batch/v1
        kind: Job
        metadata:
          name: pi
          namespace: default
        spec:
          manualSelector: true
          selector:
            matchLabels:
              job: pi
          template:
            metadata:
              labels:
                job: pi
            spec:
              containers:
              - name: pi
                image: perl:5.34.0
                command: ["perl",  "-Mbignum=bpi", "-wle", "print bpi(2000)"]
              restartPolicy: Never
          backoffLimit: 4
  2. Set the updateStrategy to ServerSideApply in the ManifestWork for the API.
apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata:
  namespace: cluster1
  name: example-job
spec:
  manifestConfigs:
    - resourceIdentifier:
        group: batch
        resource: jobs
        namespace: default
        name: pi
      updateStrategy:
        type: ServerSideApply
  workload:
    manifests:
      - apiVersion: batch/v1
        kind: Job
        metadata:
          name: pi
          namespace: default
        spec:
          template:
            spec:
              containers:
              - name: pi
                image: perl:5.34.0
                command: ["perl",  "-Mbignum=bpi", "-wle", "print bpi(2000)"]
              restartPolicy: Never
          backoffLimit: 4

Dynamic identity authorization

By default, all manifests in a ManifestWork are applied by the work-agent using its mounted service account to make requests against the managed cluster. The work agent has very high permissions on the managed cluster, which means that any hub user with write access to ManifestWork resources is able to dispatch any resources that the work-agent can manipulate to the managed cluster.

The executor subject feature (introduced in release 0.9.0) provides a way to clarify the owner identity (executor) of the ManifestWork before it takes effect, so that we can explicitly check whether the executor has sufficient permissions in the managed cluster.

The following example sets the owner “executor1” on the ManifestWork, so before the work-agent applies the “default/test” ConfigMap to the managed cluster, it first checks whether the ServiceAccount “default/executor1” has permission to apply this ConfigMap:

apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata:
  namespace: cluster1
  name: example-manifestwork
spec:
  executor:
    subject:
      type: ServiceAccount
      serviceAccount:
        namespace: default
        name: executor1
  workload:
    manifests:
      - apiVersion: v1
        data:
          a: b
        kind: ConfigMap
        metadata:
          namespace: default
          name: test

Not every hub user can specify any executor at will. Hub users can only use an executor for which they have the execute-as (virtual verb) permission. For example, hub users bound to the following Role can use the “executor1” ServiceAccount in the “default” namespace on the managed cluster.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cluster1-executor1
  namespace: cluster1
rules:
- apiGroups:
  - work.open-cluster-management.io
  resources:
  - manifestworks
  verbs:
  - execute-as
  resourceNames:
  - system:serviceaccount:default:executor1

For backward compatibility, if the executor is absent, the work agent keeps using the mounted service account to apply resources. However, using the executor is encouraged, so there is a feature gate NilExecutorValidating to control whether a hub user is allowed to omit the executor. It is disabled by default; we can use the following configuration on the ClusterManager to enable it. When it is enabled, not setting an executor is treated as using the “/klusterlet-work-sa” (namespace is empty, name is klusterlet-work-sa) virtual service account on the managed cluster for permission verification, which means only hub users with the “execute-as” permission on “system:serviceaccount::klusterlet-work-sa” for ManifestWorks are allowed to omit the executor.

spec:
  workConfiguration:
    featureGates:
    - feature: NilExecutorValidating
      mode: Enable

The work-agent uses the SubjectAccessReview API to check whether an executor has permission for the manifest resources, which can cause a large number of SAR requests to the managed cluster API server. Therefore a feature gate ExecutorValidatingCaches was added (in release 0.10.0) to cache the results of the executor’s permission checks on the manifest resources. It only works when the managed cluster uses RBAC mode authorization, is disabled by default as well, and can be enabled by using the following configuration for the Klusterlet:

spec:
  workConfiguration:
    featureGates:
    - feature: ExecutorValidatingCaches
      mode: Enable

Enhancement proposal: Work Executor Group

7 - ManifestWorkReplicaSet

What is ManifestWorkReplicaSet

ManifestWorkReplicaSet is an aggregator API that uses ManifestWork and Placement to create ManifestWorks for the clusters selected by the placements.

Below is an example of a ManifestWorkReplicaSet that deploys a CronJob and Namespace to a group of clusters selected by placements.

apiVersion: work.open-cluster-management.io/v1alpha1
kind: ManifestWorkReplicaSet
metadata:
  name: mwrset-cronjob
  namespace: ocm-ns
spec:
  placementRefs:
    - name: placement-rollout-all # Name of a created Placement
      rolloutStrategy:
        rolloutType: All
    - name: placement-rollout-progressive # Name of a created Placement
      rolloutStrategy:
        rolloutType: Progressive
        progressive:
          minSuccessTime: 5m
          progressDeadline: 10m
          maxFailures: 5%
          mandatoryDecisionGroups:
          - groupName: "prod-canary-west"
          - groupName: "prod-canary-east"
    - name: placement-rollout-progressive-per-group # Name of a created Placement
      rolloutStrategy:
        rolloutType: ProgressivePerGroup
        progressivePerGroup:
          progressDeadline: 10m
          maxFailures: 2
  manifestWorkTemplate:
    deleteOption:
      propagationPolicy: SelectivelyOrphan
      selectivelyOrphans:
        orphaningRules:
          - group: ''
            name: ocm-ns
            namespace: ''
            resource: Namespace
    manifestConfigs:
      - feedbackRules:
          - jsonPaths:
              - name: lastScheduleTime
                path: .status.lastScheduleTime
              - name: lastSuccessfulTime
                path: .status.lastSuccessfulTime
            type: JSONPaths
        resourceIdentifier:
          group: batch
          name: sync-cronjob
          namespace: ocm-ns
          resource: cronjobs
    workload:
      manifests:
        - kind: Namespace
          apiVersion: v1
          metadata:
            name: ocm-ns
        - kind: CronJob
          apiVersion: batch/v1
          metadata:
            name: sync-cronjob
            namespace: ocm-ns
          spec:
            schedule: '* * * * *'
            concurrencyPolicy: Allow
            suspend: false
            jobTemplate:
              spec:
                backoffLimit: 2
                template:
                  spec:
                    containers:
                      - name: hello
                        image: 'quay.io/prometheus/busybox:latest'
                        args:
                          - /bin/sh
                          - '-c'
                          - date; echo Hello from the Kubernetes cluster

The placementRefs use the Rollout Strategy API to apply the ManifestWork to the selected clusters. In the example above, the placementRefs refer to three placements: placement-rollout-all, placement-rollout-progressive and placement-rollout-progressive-per-group. For more information regarding the rollout strategies, check the Rollout Strategy section in the Placement document. Note: the placement references must be in the same namespace as the ManifestWorkReplicaSet.

Status tracking

The ManifestWorkReplicaSet example above refers to three placements, and each one will have its placementSummary in the ManifestWorkReplicaSet status. The placementSummary shows the number of ManifestWorks applied to the placement’s clusters, based on the placementRef’s rolloutStrategy, and the total number of clusters. The ManifestWorkReplicaSet summary aggregates the placementSummaries, showing the total number of ManifestWorks applied to all clusters.

The ManifestWorkReplicaSet has three status conditions:

  1. PlacementVerified verifies the placementRefs status, for example a placement that does not exist or an empty cluster selection.
  2. PlacementRolledOut verifies the rollout strategy status: progressing or complete.
  3. ManifestWorkApplied verifies the created ManifestWork status: applied, progressing, degraded or available.

The ManifestWorkReplicaSet determines the ManifestWorkApplied condition status based on the resource state (applied or available) of each ManifestWork.

Here is an example.

apiVersion: work.open-cluster-management.io/v1alpha1
kind: ManifestWorkReplicaSet
metadata:
  name: mwrset-cronjob
  namespace: ocm-ns
spec:
  placementRefs:
    - name: placement-rollout-all
      ...
    - name: placement-rollout-progressive
      ...
    - name: placement-rollout-progressive-per-group
      ...
  manifestWorkTemplate:
     ...
status:
 conditions:
   - lastTransitionTime: '2023-04-27T02:30:54Z'
     message: ''
     reason: AsExpected
     status: 'True'
     type: PlacementVerified
   - lastTransitionTime: '2023-04-27T02:30:54Z'
     message: ''
     reason: Progressing
     status: 'False'
     type: PlacementRolledOut
   - lastTransitionTime: '2023-04-27T02:30:54Z'
     message: ''
     reason: AsExpected
     status: 'True'
     type: ManifestworkApplied
 placementSummary:
 - name: placement-rollout-all
   availableDecisionGroups: 1 (10 / 10 clusters applied)
   summary:
     applied: 10
     available: 10
     progressing: 0
     degraded: 0
     total: 10
 - name: placement-rollout-progressive
   availableDecisionGroups: 3 (20 / 30 clusters applied)
   summary:
     applied: 20
     available: 20
     progressing: 0
     degraded: 0
     total: 20
 - name: placement-rollout-progressive-per-group
   availableDecisionGroups: 4 (15 / 20 clusters applied)
   summary:
     applied: 15
     available: 15
     progressing: 0
     degraded: 0
     total: 15
 summary:
   applied: 45
   available: 45
   progressing: 0
   degraded: 0
   total: 45

Release and Enable Feature

ManifestWorkReplicaSet is an alpha feature and is not enabled by default. To use it, the feature has to be enabled in the cluster-manager instance on the hub. Use the following command to edit the cluster-manager CR (custom resource) in the hub cluster:

$ oc edit ClusterManager cluster-manager

Add the workConfiguration field to the cluster-manager CR as below and save.

kind: ClusterManager
metadata:
  name: cluster-manager
spec:
   ...
  workConfiguration:
    featureGates:
    - feature: ManifestWorkReplicaSet
      mode: Enable

To verify that the ManifestWorkReplicaSet feature has been enabled successfully, check the cluster-manager using the command below:

$ oc get ClusterManager cluster-manager -o yaml

You should find, under status.generations, that the cluster-manager-work-controller deployment has been added, as shown below:

kind: ClusterManager
metadata:
  name: cluster-manager
spec:
   ...
status:
   ...
  generations:
    ...
  - group: apps
    lastGeneration: 2
    name: cluster-manager-work-webhook
    namespace: open-cluster-management-hub
    resource: deployments
    version: v1
  - group: apps
    lastGeneration: 1
    name: cluster-manager-work-controller
    namespace: open-cluster-management-hub
    resource: deployments
    version: v1

8 - Add-ons

What is an add-on?

Open-cluster-management has a built-in mechanism named addon-framework that helps developers develop extensions, based on the foundation components, for working with multiple clusters in custom cases. A typical addon consists of two kinds of components:

  • Addon Agent: A kubernetes controller in the managed cluster that manages the managed cluster for the hub admins. A typical addon agent works by subscribing to prescriptions (e.g. in the form of CustomResources) from the hub cluster and then consistently reconciling the state of the managed cluster, like an ordinary kubernetes operator does.

  • Addon Manager: A kubernetes controller in the hub cluster that applies manifests to the managed clusters via the ManifestWork API. In addition to resource dispatching, the manager can optionally manage the lifecycle of CSRs for the addon agents, or even the RBAC permissions bound to the CSRs’ requesting identity.

In general, if a management tool working inside the managed cluster needs to discriminate configuration for each managed cluster, it will be helpful to model its implementation as a working addon agent. The configurations for each agent are supposed to be persisted in the hub cluster, so the hub admin will be able to prescribe the agent to do its job in a declarative way. In abstraction, via the addon we will be decoupling a multi-cluster control plane into (1) strategy dispatching and (2) execution. The addon manager doesn’t actually apply any changes directly to the managed cluster, instead it just places its prescription to a dedicated namespace allocated for the accepted managed cluster. Then the addon agent pulls the prescriptions consistently and does the execution.

In addition to dispatching configurations to the agents, the addon manager automatically takes care of some fiddly preparation before the agent bootstraps, such as:

  • CSR applying, approving and signing.
  • Injecting and managing client credentials used by agents to access the hub cluster.
  • The RBAC permission for the agents both in the hub cluster or the managed cluster.
  • Installing strategy.

Architecture

The following architecture graph shows how the coordination between addon manager and addon agent works.

Addon Architecture

Add-on enablement

From a user’s perspective, to install the addon to the hub cluster the hub admin should register a globally-unique ClusterManagementAddon resource as a singleton placeholder in the hub cluster. For instance, the helloworld add-on can be registered to the hub cluster by creating:

apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ClusterManagementAddOn
metadata:
  name: helloworld
spec:
  addOnMeta:
    displayName: helloworld

Enable the add-on manually

The addon manager running on the hub is responsible for configuring the installation of addon agents for each managed cluster. When a user wants to enable the add-on for a certain managed cluster, the user should create a ManagedClusterAddOn resource in the cluster namespace. The name of the ManagedClusterAddOn should be the same as the name of the corresponding ClusterManagementAddon. For instance, the following example enables the helloworld add-on in “cluster1”:

apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: helloworld
  namespace: cluster1
spec:
  installNamespace: helloworld

Enable the add-on automatically

If the addon is developed with automatic installation, which supports auto-install by cluster discovery, the ManagedClusterAddOn will be created automatically, either for all managed cluster namespaces or only for the selected managed cluster namespaces.

Enable the add-on by install strategy

If the addon is developed following the guidelines mentioned in managing the add-on agent lifecycle by addon-manager, the user can define an installStrategy in the ClusterManagementAddOn to specify on which clusters the ManagedClusterAddOn should be enabled. See install strategy for details.

Add-on healthiness

The healthiness of the addon instances is visible when we list the addons via kubectl:

$ kubectl get managedclusteraddon -A
NAMESPACE   NAME                     AVAILABLE   DEGRADED   PROGRESSING
<cluster>   <addon>                  True

The addon agent is expected to report its healthiness periodically as long as it’s running. The version of the addon agent can also optionally be reflected in the resources so that we can control upgrading the agents progressively.

Clean the add-ons

Last but not least, a clean uninstallation of the addon is also supported by simply deleting the corresponding ClusterManagementAddon resource from the hub cluster, which is the “root” of the whole addon. The OCM platform will automatically sanitize the hub cluster for you after the uninstall by removing all the components in both the hub cluster and the managed clusters.
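For example, to uninstall the helloworld add-on shown above:

kubectl delete clustermanagementaddon helloworld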

Add-on lifecycle management

Install strategy

InstallStrategy specifies on which clusters the related ManagedClusterAddOns should be installed. For example, the following example enables the helloworld add-on on clusters with the aws platform claim.

apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ClusterManagementAddOn
metadata:
  name: helloworld
  annotations:
    addon.open-cluster-management.io/lifecycle: "addon-manager"
spec:
  addOnMeta:
    displayName: helloworld
  installStrategy:
    type: Placements
    placements:
    - name: placement-aws
      namespace: default

apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: placement-aws
  namespace: default
spec:
  predicates:
    - requiredClusterSelector:
        claimSelector:
          matchExpressions:
            - key: platform.open-cluster-management.io
              operator: In
              values:
                - aws

Rollout strategy

With the rollout strategy defined in the ClusterManagementAddOn API, users can control the upgrade behavior of the addon when there are changes in the configurations.

For example, suppose the add-on user updates the “deploy-config” and wants to apply the change to a “canary” decision group first, and then, if all the add-ons in the canary group upgrade successfully, upgrade the rest of the clusters progressively at a rate of 25%. The rollout strategy can be defined as follows:

apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ClusterManagementAddOn
metadata:
  name: helloworld
  annotations:
    addon.open-cluster-management.io/lifecycle: "addon-manager"
spec:
  addOnMeta:
    displayName: helloworld
  installStrategy:
    type: Placements
    placements:
    - name: placement-aws
      namespace: default
      configs:
      - group: addon.open-cluster-management.io
        resource: addondeploymentconfigs
        name: deploy-config
        namespace: open-cluster-management
      rolloutStrategy:
        type: Progressive
        progressive:
          mandatoryDecisionGroups:
          - groupName: "prod-canary-west"
          - groupName: "prod-canary-east"
          maxConcurrency: 25%
          minSuccessTime: 5m
          progressDeadline: 10m
          maxFailures: 2

In the above example with type Progressive, once the user updates the “deploy-config”, the controller rolls out on the clusters in mandatoryDecisionGroups first, then rolls out on the other clusters at the rate defined in maxConcurrency.

  • minSuccessTime is a “soak” time, meaning the controller will wait for 5 minutes after a cluster reaches a successful state, as long as maxFailures isn’t breached. If, after this 5-minute interval, the workload status remains successful, the rollout progresses to the next clusters.
  • progressDeadline means the controller will wait for a maximum of 10 minutes for the workload to reach a successful state. If the workload fails to achieve success within 10 minutes, the controller stops waiting, marks the workload as “timeout”, and includes it in the count of maxFailures.
  • maxFailures means the controller can tolerate updates on up to 2 clusters with failed status; once maxFailures is breached, the rollout stops.

Currently the add-on supports 3 types of rolloutStrategy: All, Progressive and ProgressivePerGroup. For more information regarding the rollout strategies, check the Rollout Strategy document.

Add-on configurations

Default configurations

In ClusterManagementAddOn, spec.supportedConfigs is a list of configuration types supported by the add-on. defaultConfig represents the namespace and name of the default add-on configuration, for scenarios where all add-ons use the same configuration. Only one configuration of the same group and resource can be specified in defaultConfig.

In the example below, add-ons on all the clusters will use “default-deploy-config” and “default-example-config”.

apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ClusterManagementAddOn
metadata:
  name: helloworld
  annotations:
    addon.open-cluster-management.io/lifecycle: "addon-manager"
spec:
  addOnMeta:
    displayName: helloworld
  supportedConfigs:
  - defaultConfig:
      name: default-deploy-config
      namespace: open-cluster-management
    group: addon.open-cluster-management.io
    resource: addondeploymentconfigs
  - defaultConfig:
      name: default-example-config
      namespace: open-cluster-management
    group: example.open-cluster-management.io
    resource: exampleconfigs

Configurations per install strategy

In ClusterManagementAddOn, spec.installStrategy.placements[].configs lists the configurations of the ManagedClusterAddon during installation for a group of clusters. Since OCM v0.15.0, multiple configurations with the same group and resource can be defined in this field. It overrides the Default configurations on those clusters by group and resource.

In the example below, add-ons on clusters selected by Placement placement-aws will use “deploy-config”, “example-config-1” and “example-config-2”, while all the other add-ons will still use “default-deploy-config” and “default-example-config”.

apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ClusterManagementAddOn
metadata:
  name: helloworld
  annotations:
    addon.open-cluster-management.io/lifecycle: "addon-manager"
spec:
  addOnMeta:
    displayName: helloworld
  supportedConfigs:
  - defaultConfig:
      name: default-deploy-config
      namespace: open-cluster-management
    group: addon.open-cluster-management.io
    resource: addondeploymentconfigs
  installStrategy:
    type: Placements
    placements:
    - name: placement-aws
      namespace: default
      configs:
      - group: addon.open-cluster-management.io
        resource: addondeploymentconfigs
        name: deploy-config
        namespace: open-cluster-management
      - group: example.open-cluster-management.io
        resource: exampleconfigs
        name: example-config-1
        namespace: open-cluster-management
      - group: example.open-cluster-management.io
        resource: exampleconfigs
        name: example-config-2
        namespace: open-cluster-management

Configurations per cluster

In ManagedClusterAddOn, spec.configs is a list of add-on configurations, for scenarios where the current add-on has its own configurations. Since OCM v0.15.0 it also supports defining multiple configurations with the same group and resource. It overrides the Default configurations and the Configurations per install strategy defined in ClusterManagementAddOn by group and resource.

In the example below, the add-on on cluster1 will use “cluster1-deploy-config” and “cluster1-example-config”.

apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: helloworld
  namespace: cluster1
spec:
  configs:
  - group: addon.open-cluster-management.io
    resource: addondeploymentconfigs
    name: cluster1-deploy-config
    namespace: open-cluster-management
  - group: example.open-cluster-management.io
    resource: exampleconfigs
    name: cluster1-example-config
    namespace: open-cluster-management

Supported configurations

Supported configurations is a list of configuration types that are allowed to override the add-on configurations defined in ClusterManagementAddOn spec. They are listed in the ManagedClusterAddon status.supportedConfigs, for example:

apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: helloworld
  namespace: cluster1
spec:
...
status:
...
  supportedConfigs:
  - group: addon.open-cluster-management.io
    resource: addondeploymentconfigs
  - group: example.open-cluster-management.io
    resource: exampleconfigs

Effective configurations

As described above, there are 3 places to define the add-on configurations. They have an override order, and eventually only one takes effect. The final effective configurations are listed in the ManagedClusterAddOn status.configReferences.

  • desiredConfig records the desired config and its spec hash.
  • lastAppliedConfig records the config that was in effect when the corresponding ManifestWork was applied successfully.

For example:

apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: helloworld
  namespace: cluster1
...
status:
...
  configReferences:
  - desiredConfig:
      name: cluster1-deploy-config
      namespace: open-cluster-management
      specHash: dcf88f5b11bd191ed2f886675f967684da8b5bcbe6902458f672277d469e2044
    group: addon.open-cluster-management.io
    lastAppliedConfig:
      name: cluster1-deploy-config
      namespace: open-cluster-management
      specHash: dcf88f5b11bd191ed2f886675f967684da8b5bcbe6902458f672277d469e2044
    lastObservedGeneration: 1
    name: cluster1-deploy-config
    resource: addondeploymentconfigs

Examples

Here are a few examples of cases where we need add-ons:

  1. A tool to collect alert events in the managed cluster and send them to the hub cluster.
  2. A network solution that uses the hub to share the network info and establish connection among managed clusters. See cluster-proxy
  3. A tool to spread security policies to multiple clusters.

Add-on framework

The add-on framework provides a library for developers to develop add-ons in open-cluster-management more easily. Take a look at the helloworld example to understand how the add-on framework can be used.

Custom signers

The original Kubernetes CSR API only supports three built-in signers:

  • “kubernetes.io/kube-apiserver-client”
  • “kubernetes.io/kube-apiserver-client-kubelet”
  • “kubernetes.io/kubelet-serving”

However, in some cases we need to sign additional custom certificates for the addon agents that are not used for connecting to any kube-apiserver. The addon manager can serve as a custom CSR signer controller, based on the addon-framework’s extensibility, by implementing the signing logic. Note that after successfully signing the certificates, the framework will also keep rotating the certificates automatically for the addon.

Hub credential injection

An addon manager developed based on the addon-framework will automatically persist the signed certificates as secret resources in the managed clusters after they are signed by either the original Kubernetes CSR controller or custom signers. The injected secrets will be:

  • For the “kubernetes.io/kube-apiserver-client” signer, the name will be “<addon name>-hub-kubeconfig” with properties:
    • “kubeconfig”: a kubeconfig file for accessing hub cluster with the addon’s identity.
    • “tls.crt”: the signed certificate.
    • “tls.key”: the private key.
  • For a custom signer, the name will be “<addon name>-<signer name>-client-cert” with properties:
    • “tls.crt”: the signed certificate.
    • “tls.key”: the private key.

Auto-install by cluster discovery

The addon manager can automatically install an addon to the managed clusters upon discovering new clusters by setting the InstallStrategy from the addon-framework. On the other hand, the admin can also manually install the addon for a cluster by applying a ManagedClusterAddOn into its cluster namespace.

9 - Policy

Overview

Note: this is also covered in the Open Cluster Management - Configuring Your Kubernetes Fleet With the Policy Addon video.

The policy framework has the following API concepts:

  • Policy Templates are the policies that perform a desired check or action. For example, ConfigurationPolicy objects are embedded in Policy objects under the policy-templates array.
  • A Policy is a grouping mechanism for Policy Templates and is the smallest deployable unit on the hub cluster. Embedded Policy Templates are distributed to applicable managed clusters and acted upon by the appropriate policy controller.
  • A PolicySet is a grouping mechanism of Policy objects. Compliance of all grouped Policy objects is summarized in the PolicySet. A PolicySet is a deployable unit and its distribution is controlled by a Placement.
  • A PlacementBinding binds a Placement to a Policy or PolicySet.

The second half of the KubeCon NA 2022 - OCM Multicluster App & Config Management also covers an overview of the Policy addon.

Policy

A Policy is a grouping mechanism for Policy Templates and is the smallest deployable unit on the hub cluster. Embedded Policy Templates are distributed to applicable managed clusters and acted upon by the appropriate policy controller. The compliance state and status of a Policy represents all embedded Policy Templates in the Policy. The distribution of Policy objects is controlled by a Placement.

View a simple example of a Policy that embeds a ConfigurationPolicy policy template to manage a namespace called “prod”.

apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-namespace
  namespace: policies
  annotations:
    policy.open-cluster-management.io/standards: NIST SP 800-53
    policy.open-cluster-management.io/categories: CM Configuration Management
    policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
spec:
  remediationAction: enforce
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: policy-namespace-example
        spec:
          remediationAction: inform
          severity: low
          object-templates:
            - complianceType: musthave
              objectDefinition:
                kind: Namespace # must have namespace 'prod'
                apiVersion: v1
                metadata:
                  name: prod

The annotations are standard annotations for informational purposes and can be used by user interfaces, custom report scripts, or components that integrate with OCM.

The optional spec.remediationAction field dictates whether the policy controller should inform or enforce when violations are found and overrides the remediationAction field on each policy template. When set to inform, the Policy will become noncompliant if the underlying policy templates detect that the desired state is not met. When set to enforce, the policy controller applies the desired state when necessary and feasible.

The policy-templates array contains an array of Policy Templates. Here a single ConfigurationPolicy called policy-namespace-example defines a Namespace manifest to compare with objects on the cluster. It has the remediationAction set to inform but it is overridden by the optional global spec.remediationAction. The severity is for informational purposes similar to the annotations.

Inside of the embedded ConfigurationPolicy, the object-templates section describes the prod Namespace object that the ConfigurationPolicy applies to. The action that the ConfigurationPolicy will take is determined by the complianceType. In this case, it is set to musthave which means the prod Namespace object will be created if it doesn’t exist. Other compliance types include mustnothave and mustonlyhave. mustnothave would delete the prod Namespace object. mustonlyhave would ensure the prod Namespace object only exists with the fields defined in the ConfigurationPolicy. See the ConfigurationPolicy page for more information or see the templating in configuration policies topic for advanced templating use cases with ConfigurationPolicy.

When the Policy is bound to a Placement using a PlacementBinding, the Policy status will report on each cluster that matches the bound Placement:

status:
  compliant: Compliant
  placement:
    - placement: placement-hub-cluster
      placementBinding: binding-policy-namespace
  status:
    - clustername: local-cluster
      clusternamespace: local-cluster
      compliant: Compliant

To fully explore the Policy API, run the following command:

kubectl get crd policies.policy.open-cluster-management.io -o yaml

To fully explore the ConfigurationPolicy API, run the following command:

kubectl get crd configurationpolicies.policy.open-cluster-management.io -o yaml

PlacementBinding

A PlacementBinding binds a Placement to a Policy or PolicySet.

Below is an example of a PlacementBinding that binds the policy-namespace Policy to the placement-hub-cluster Placement.

apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: binding-policy-namespace
  namespace: policies
placementRef:
  apiGroup: cluster.open-cluster-management.io
  kind: Placement
  name: placement-hub-cluster
subjects:
  - apiGroup: policy.open-cluster-management.io
    kind: Policy
    name: policy-namespace

Once the Policy is bound, it will be distributed to and acted upon by the managed clusters that match the Placement.
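For completeness, here is a minimal sketch of the referenced placement-hub-cluster Placement; it assumes the policies namespace is already bound to a ManagedClusterSet and uses a hypothetical environment: dev cluster label:

apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: placement-hub-cluster
  namespace: policies
spec:
  predicates:
    - requiredClusterSelector:
        labelSelector:
          matchLabels:
            environment: dev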

PolicySet

A PolicySet is a grouping mechanism of Policy objects. Compliance of all grouped Policy objects is summarized in the PolicySet. A PolicySet is a deployable unit and its distribution is controlled by a Placement when bound through a PlacementBinding.

This enables a workflow where subject matter experts write Policy objects and then an IT administrator creates a PolicySet that groups the previously written Policy objects and binds the PolicySet to a Placement that deploys the PolicySet.

An example of a PolicySet is shown below.

apiVersion: policy.open-cluster-management.io/v1beta1
kind: PolicySet
metadata:
  name: ocm-hardening
  namespace: policies
spec:
  description: Apply standard best practices for hardening your Open Cluster Management installation.
  policies:
    - policy-check-backups
    - policy-managedclusteraddon-available
    - policy-subscriptions

Managed cluster policy controllers

The Policy on the hub delivers the policies defined in spec.policy-templates to the managed clusters via the policy framework controllers. Once on the managed cluster, these Policy Templates are acted upon by the associated controller on the managed cluster. The policy framework supports delivering the Policy Template kinds listed here:

  • Configuration policy

    The ConfigurationPolicy is provided by OCM and defines Kubernetes manifests to compare with objects that currently exist on the cluster. The action that the ConfigurationPolicy will take is determined by its complianceType. Compliance types include musthave, mustnothave, and mustonlyhave. musthave means the object should have the listed keys and values as a subset of the larger object. mustnothave means an object matching the listed keys and values should not exist. mustonlyhave ensures objects only exist with the keys and values exactly as defined. See the page on Configuration Policy for more information.

  • Open Policy Agent Gatekeeper

    Gatekeeper is a validating webhook with auditing capabilities that can enforce custom resource definition-based policies that are run with the Open Policy Agent (OPA). Gatekeeper ConstraintTemplates and constraints can be provided in an OCM Policy to sync to managed clusters that have Gatekeeper installed on them. See the page on Gatekeeper integration for more information.

Templating in configuration policies

Configuration policies support the inclusion of Golang text templates in the object definitions. These templates are resolved at runtime either on the hub cluster or the target managed cluster using configurations related to that cluster. This gives you the ability to define configuration policies with dynamic content and to inform or enforce Kubernetes resources that are customized to the target cluster.

The template syntax must follow the Golang template language specification, and the resource definition generated from the resolved template must be a valid YAML. (See the Golang documentation about package templates for more information.) Any errors in template validation appear as policy violations. When you use a custom template function, the values are replaced at runtime.

Template functions, such as resource-specific and generic lookup template functions, are available for referencing Kubernetes resources on the hub cluster (using the {{hub ... hub}} delimiters), or managed cluster (using the {{ ... }} delimiters). See the Hub cluster templates section for more details. The resource-specific functions are used for convenience and make the content of the resources more accessible. If you use the generic function, lookup, which is more advanced, it is best to be familiar with the YAML structure of the resource that is being looked up. In addition to these functions, utility functions like base64encode, base64decode, indent, autoindent, toInt, and toBool are also available.

To conform templates with YAML syntax, templates must be set in the policy resource as strings using quotes or a block character (| or >). This causes the resolved template value to also be a string. To override this, consider using toInt or toBool as the final function in the template to initiate further processing that forces the value to be interpreted as an integer or boolean respectively.
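For instance, here is a hedged fragment of an object-templates entry that pulls a replica count from a hypothetical site1 ConfigMap in the site-config namespace; the quoted template would otherwise resolve to a string, so toInt forces an integer:

object-templates:
  - complianceType: musthave
    objectDefinition:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: hello
        namespace: default
      spec:
        # Quoted templates resolve to strings by default; toInt forces an integer.
        replicas: '{{ (fromConfigMap "site-config" "site1" "replicas") | toInt }}'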

To bypass template processing you can either:

  • Override a single template by wrapping the template in additional braces. For example, the template {{ template content }} would become {{ '{{ template content }}' }}.
  • Override all templates in a ConfigurationPolicy by adding the policy.open-cluster-management.io/disable-templates: "true" annotation in the ConfigurationPolicy section of your Policy. Template processing will be bypassed for that ConfigurationPolicy.

Hub cluster templating in configuration policies

Hub cluster templates are used to define configuration policies that are dynamically customized to the target cluster. This reduces the need to create separate policies for each target cluster or hardcode configuration values in the policy definitions.

Hub cluster templates are based on Golang text template specifications, and the {{hub … hub}} delimiter indicates a hub cluster template in a configuration policy.

A configuration policy definition can contain both hub cluster and managed cluster templates. Hub cluster templates are processed first on the hub cluster, then the policy definition with resolved hub cluster templates is propagated to the target clusters. On the managed cluster, the Configuration Policy controller processes any managed cluster templates in the policy definition and then enforces or verifies the fully resolved object definition.

In OCM versions 0.9.x and older, policies are processed on the hub cluster only upon creation or after an update. Therefore, hub cluster templates are only resolved to the data in the referenced resources upon policy creation or update. Any changes to the referenced resources are not automatically synced to the policies.

A special annotation, policy.open-cluster-management.io/trigger-update, can be used to indicate changes to the data referenced by the templates. Any change to the special annotation value initiates template processing, and the latest contents of the referenced resource are read and updated in the policy definition that is propagated for processing on managed clusters. A typical way to use this annotation is to increment the value by one each time.
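For example, using the policy-namespace Policy from earlier, incrementing the annotation value on the hub triggers re-processing of the templates:

kubectl -n policies annotate policies.policy.open-cluster-management.io policy-namespace \
  policy.open-cluster-management.io/trigger-update="2" --overwrite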

Templating value encryption

The encryption algorithm uses AES-CBC with 256-bit keys. Each encryption key is unique per managed cluster and is automatically rotated every 30 days. This ensures that your decrypted value is never stored in the policy on the managed cluster.

To force an immediate encryption key rotation, delete the policy.open-cluster-management.io/last-rotated annotation on the policy-encryption-key Secret in the managed cluster namespace on the hub cluster. Policies are then reprocessed to use the new encryption key.
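Here is a sketch of forcing a rotation, assuming the managed cluster namespace on the hub is cluster1 (the trailing dash removes the annotation):

kubectl -n cluster1 annotate secret policy-encryption-key \
  policy.open-cluster-management.io/last-rotated-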

Templating functions

Each entry below lists the function, its description, and a sample:

  • fromSecret: Returns the value of the given data key in the secret.
    Sample: PASSWORD: '{{ fromSecret "default" "localsecret" "PASSWORD" }}'
  • fromConfigmap: Returns the value of the given data key in the ConfigMap.
    Sample: log-file: '{{ fromConfigMap "default" "logs-config" "log-file" }}'
  • fromClusterClaim: Returns the value of spec.value in the ClusterClaim resource.
    Sample: platform: '{{ fromClusterClaim "platform.open-cluster-management.io" }}'
  • lookup: Returns the Kubernetes resource as a JSON compatible map. Note that if the requested resource does not exist, an empty map is returned.
    Sample: metrics-url: |
      http://{{ (lookup "v1" "Service" "default" "metrics").spec.clusterIP }}:8080
  • base64enc: Returns a base64 encoded value of the input string.
    Sample: USER_NAME: '{{ fromConfigMap "default" "myconfigmap" "admin-user" | base64enc }}'
  • base64dec: Returns a base64 decoded value of the input string.
    Sample: app-name: |
      "{{ ( lookup "v1" "Secret" "testns" "mytestsecret" ).data.appname | base64dec }}"
  • indent: Returns the input string indented by the given number of spaces.
    Sample: Ca-cert: |
      {{ ( index ( lookup "v1" "Secret" "default" "mycert-tls" ).data "ca.pem" ) | base64dec | indent 4 }}
  • autoindent: Acts like the indent function but automatically determines the number of leading spaces needed based on the number of spaces before the template.
    Sample: Ca-cert: |
      {{ ( index ( lookup "v1" "Secret" "default" "mycert-tls" ).data "ca.pem" ) | base64dec | autoindent }}
  • toInt: Returns the integer value of the string and ensures that the value is interpreted as an integer in the YAML.
    Sample: vlanid: |
      {{ (fromConfigMap "site-config" "site1" "vlan") | toInt }}
  • toBool: Returns the boolean value of the input string and ensures that the value is interpreted as a boolean in the YAML.
    Sample: enabled: |
      {{ (fromConfigMap "site-config" "site1" "enabled") | toBool }}
  • protect: Encrypts the input string. It is decrypted when the policy is evaluated. On the replicated policy in the managed cluster namespace, the resulting value resembles the following: $ocm_encrypted:<encrypted-value>
    Sample: enabled: |
      {{hub (lookup "route.openshift.io/v1" "Route" "openshift-authentication" "oauth-openshift").spec.host | protect hub}}

Additionally, OCM supports the following template functions that are included from the sprig open source project:

  • cat
  • contains
  • default
  • empty
  • fromJson
  • hasPrefix
  • hasSuffix
  • join
  • list
  • lower
  • mustFromJson
  • quote
  • replace
  • semver
  • semverCompare
  • split
  • splitn
  • ternary
  • trim
  • until
  • untilStep
  • upper

See the Sprig documentation for more details.
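
For instance, Sprig helpers can be chained after the template functions above; a sketch (the ConfigMap name and key are hypothetical):

log-level: '{{ (fromConfigMap "default" "app-config" "log-level") | default "info" | upper }}'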

10 - Multicluster Control Plane

What is Multicluster Control Plane

The multicluster control plane is a lightweight Open Cluster Management (OCM) control plane that is easy to install and has a small footprint. It can run anywhere, with or without a Kubernetes environment, to serve the OCM control plane capabilities.

Why use Multicluster Control Plane

  1. Some Kubernetes environments do not have CSR (e.g., EKS), so the standard OCM control plane cannot be installed. The multicluster control plane can be installed in these environments and can expose the OCM control plane API via a load balancer.

  2. Some users may want to run multiple OCM control planes to isolate data; the typical case is running one OCM control plane for production and another for development. The multicluster control plane can be installed in different namespaces of a single cluster, with each instance running independently and serving the OCM control plane capabilities.

  3. Some users may want to run the OCM control plane without a Kubernetes environment. The multicluster control plane can run in standalone mode, for example in a VM, and expose the control plane API to the outside so that managed clusters can register with it.

How to use Multicluster Control Plane

Start the standalone multicluster control plane

You need to build multicluster-controlplane on your local host. Follow the steps below to build the binary and start the multicluster control plane.

git clone https://github.com/open-cluster-management-io/multicluster-controlplane.git
cd multicluster-controlplane
make run

Once the control plane is running, you can access it with kubectl --kubeconfig=./_output/controlplane/.ocm/cert/kube-aggregator.kubeconfig.
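
For example, you can list the API resources served by the standalone control plane (the grep pattern is only illustrative):

kubectl --kubeconfig=./_output/controlplane/.ocm/cert/kube-aggregator.kubeconfig api-resources | grep open-cluster-management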

You can customize the control plane configurations by creating a config file and using the environment variable CONFIG_DIR to specify your config file directory. Please check the repository documentation for details.
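
A minimal sketch of pointing the control plane at a custom configuration directory (the path is hypothetical, and the expected file format is described in the repository documentation):

# Directory that contains your control plane config file
export CONFIG_DIR=$HOME/ocm-controlplane-config
make run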

Install via clusteradm

Install clusteradm CLI tool

It’s recommended to run the following command to download and install the latest release of the clusteradm command-line tool:

curl -L https://raw.githubusercontent.com/open-cluster-management-io/clusteradm/main/install.sh | bash
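
A quick check that the binary is installed and on your PATH (assuming your release provides the version subcommand; output varies by release):

clusteradm version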

Install multicluster control plane

You can use clusteradm init to deploy the multicluster control plane in your Kubernetes environment.

  1. Set the environment variable KUBECONFIG to your cluster kubeconfig path. For instance, create a new KinD cluster and deploy the multicluster control plane in it.
export KUBECONFIG=/tmp/kind-controlplane.kubeconfig
kind create cluster --name multicluster-controlplane
export mc_cp_node_ip=$(kubectl get nodes -o=jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
  2. Run the following command to deploy the control plane:
clusteradm init --singleton=true \
  --set route.enabled=false \
  --set nodeport.enabled=true \
  --set nodeport.port=30443 \
  --set apiserver.externalHostname=$mc_cp_node_ip \
  --set apiserver.externalPort=30443 \
  --singleton-name multicluster-controlplane

Refer to the repository documentation for how to customize the control plane configurations.

  3. Get the control plane kubeconfig by running the following command:
kubectl -n multicluster-controlplane get secrets multicluster-controlplane-kubeconfig -ojsonpath='{.data.kubeconfig}' | base64 -d > /tmp/multicluster-controlplane.kubeconfig
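
As an optional sanity check (no managed clusters are expected yet), the control plane API should respond to queries for OCM resources:

kubectl --kubeconfig /tmp/multicluster-controlplane.kubeconfig get managedclusters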

Join a cluster to the multicluster control plane

You can use clusteradm to join a cluster. Using a KinD cluster as an example, run the following commands to join it to the control plane:

kind create cluster --name cluster1 --kubeconfig /tmp/kind-cluster1.kubeconfig
clusteradm --kubeconfig=/tmp/multicluster-controlplane.kubeconfig get token --use-bootstrap-token
clusteradm --singleton=true --kubeconfig /tmp/kind-cluster1.kubeconfig join --hub-token <controlplane token> --hub-apiserver https://$mc_cp_node_ip:30443/ --cluster-name cluster1
clusteradm --kubeconfig=/tmp/multicluster-controlplane.kubeconfig accept --clusters cluster1

Verify the cluster join

Run this command to verify the cluster join:

kubectl --kubeconfig=/tmp/multicluster-controlplane.kubeconfig get managedcluster
NAME       HUB ACCEPTED   MANAGED CLUSTER URLS                  JOINED   AVAILABLE   AGE
cluster1   true           https://cluster1-control-plane:6443   True     True        5m25s

You should see that the managed cluster has joined the multicluster control plane. Congratulations!