Install the core control plane that includes cluster registration and manifests distribution on the hub cluster.
Install the klusterlet agent on the managed cluster so that it can be registered and managed by the hub cluster.
The hub cluster should be v1.19+. (To run on a hub cluster version between v1.16 and v1.18, please manually enable the feature gate "V1beta1CSRAPICompatibility".)
Configure your network settings for the hub cluster to allow the following connections.
| Direction | Endpoint | Protocol | Purpose | Used by |
| --- | --- | --- | --- | --- |
| Inbound | https://{hub-api-server-url}:{port} | TCP | Kubernetes API server of the hub cluster | OCM agents, including the add-on agents, running on the managed clusters |
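As a quick sanity check, you can verify that the hub API server is reachable from the managed cluster's network. The endpoint and port below are placeholders for your actual hub API server address; the /livez health endpoint is served to anonymous clients by default on kube-apiserver.
# Run from a node or pod in the managed cluster's network (placeholder endpoint).
curl -k https://<hub-api-server-url>:<port>/livez
# A healthy API server responds with "ok".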
It’s recommended to run the following command to download and install the
latest release of the clusteradm
command-line tool:
curl -L https://raw.githubusercontent.com/open-cluster-management-io/clusteradm/main/install.sh | bash
You can also install the latest development version (main branch) by running:
# Installing clusteradm to $GOPATH/bin/
GO111MODULE=off go get -u open-cluster-management.io/clusteradm/...
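Either way, you can confirm that the tool is installed and on your PATH (the exact output format varies by release):
clusteradm version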
Before installing the OCM components into your clusters, export the following environment variable in your terminal so that the clusteradm command-line tool can correctly identify the hub cluster.
# The context name of the clusters in your kubeconfig
export CTX_HUB_CLUSTER=<your hub cluster context>
Call clusteradm init:
# By default, it installs the latest release of the OCM components.
# Use e.g. "--bundle-version=latest" to install latest development builds.
# NOTE: For hub cluster versions v1.16 to v1.19, use the parameter: --use-bootstrap-token
clusteradm init --wait --context ${CTX_HUB_CLUSTER}
The clusteradm init
command installs the
registration-operator
on the hub cluster, which is responsible for consistently installing
and upgrading a few core components for the OCM environment.
After the init
command completes, a generated command is output on the console to
register your managed clusters. An example of the generated command is shown below.
clusteradm join \
--hub-token <your token data> \
--hub-apiserver <your hub kube-apiserver endpoint> \
--wait \
--cluster-name <cluster_name>
It’s recommended to save the command somewhere secure for future use. If it’s lost, you can use
clusteradm get token
to get the generated command again.
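The command prints the hub token together with a ready-to-copy join command (the exact output wording varies by clusteradm release):
clusteradm get token --context ${CTX_HUB_CLUSTER}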
Check that the registration operator has been deployed on the hub cluster:
kubectl -n open-cluster-management get pod --context ${CTX_HUB_CLUSTER}
NAME READY STATUS RESTARTS AGE
cluster-manager-695d945d4d-5dn8k 1/1 Running 0 19d
Additionally, to check out the instances of OCM’s hub control plane, run the following command:
kubectl -n open-cluster-management-hub get pod --context ${CTX_HUB_CLUSTER}
NAME READY STATUS RESTARTS AGE
cluster-manager-placement-controller-857f8f7654-x7sfz 1/1 Running 0 19d
cluster-manager-registration-controller-85b6bd784f-jbg8s 1/1 Running 0 19d
cluster-manager-registration-webhook-59c9b89499-n7m2x 1/1 Running 0 19d
cluster-manager-work-webhook-59cf7dc855-shq5p 1/1 Running 0 19d
...
The overall installation information is visible on the clustermanager
custom resource:
kubectl get clustermanager cluster-manager -o yaml --context ${CTX_HUB_CLUSTER}
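For a quicker view than the full YAML, you can list just the status conditions with a jsonpath query (a minimal sketch; the condition types present depend on your OCM release):
kubectl get clustermanager cluster-manager --context ${CTX_HUB_CLUSTER} \
  -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'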
Before uninstalling the OCM components from your clusters, please detach the managed cluster from the control plane.
clusteradm clean --context ${CTX_HUB_CLUSTER}
Check that the instances of OCM's hub control plane are removed.
kubectl -n open-cluster-management-hub get pod --context ${CTX_HUB_CLUSTER}
No resources found in open-cluster-management-hub namespace.
kubectl -n open-cluster-management get pod --context ${CTX_HUB_CLUSTER}
No resources found in open-cluster-management namespace.
Check that the clustermanager resource is removed from the control plane.
kubectl get clustermanager --context ${CTX_HUB_CLUSTER}
error: the server doesn't have a resource type "clustermanager"
After the cluster manager is installed on the hub cluster, you need to install the klusterlet agent on another cluster so that it can be registered and managed by the hub cluster.
Configure your network settings for the managed clusters to allow the following connections.
| Direction | Endpoint | Protocol | Purpose | Used by |
| --- | --- | --- | --- | --- |
| Outbound | https://{hub-api-server-url}:{port} | TCP | Kubernetes API server of the hub cluster | OCM agents, including the add-on agents, running on the managed clusters |
To use a proxy, please make sure the proxy server is well configured to allow the above connections and the proxy server is reachable for the managed clusters. See Register a cluster to hub through proxy server for more details.
Before installing the OCM components into your clusters, export the following environment variables in your terminal so that the clusteradm command-line tool can correctly distinguish the hub cluster from the managed cluster:
# The context names of the clusters in your kubeconfig
export CTX_HUB_CLUSTER=<your hub cluster context>
export CTX_MANAGED_CLUSTER=<your managed cluster context>
Copy the previously generated clusteradm join command and add the arguments appropriate for your cluster distribution, as shown in the variants below.
NOTE: If there is no configmap kube-root-ca.crt in the kube-public namespace of the hub cluster, the flag --ca-file should be set to provide a valid hub CA file to help set up the external client.
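One way to produce such a CA file (a sketch, assuming the CA data is embedded in your hub kubeconfig) is to extract it from the kubeconfig and pass the resulting file via --ca-file:
kubectl config view --raw --minify --context ${CTX_HUB_CLUSTER} \
  -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64 -d > hub-ca.crt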
# NOTE: For KinD clusters use the parameter: --force-internal-endpoint-lookup
# "cluster1" may be replaced by any other arbitrary unique name.
clusteradm join \
    --hub-token <your token data> \
    --hub-apiserver <your hub cluster endpoint> \
    --wait \
    --cluster-name "cluster1" \
    --force-internal-endpoint-lookup \
    --context ${CTX_MANAGED_CLUSTER}
On non-KinD clusters, the flag can be omitted:
# "cluster1" may be replaced by any other arbitrary unique name.
clusteradm join \
    --hub-token <your token data> \
    --hub-apiserver <your hub cluster endpoint> \
    --wait \
    --cluster-name "cluster1" \
    --context ${CTX_MANAGED_CLUSTER}
Using the above command, the klusterlet components (registration-agent and work-agent) will be deployed on the managed cluster; it is mandatory to expose the hub cluster to the managed cluster. There is also an option to run the klusterlet components outside the managed cluster, for example, on the hub cluster (hosted mode).
The hosted mode deployment is still in an experimental stage, so consider it carefully before adopting it.
In hosted mode, the cluster where the klusterlet runs is called the hosting cluster. Run the following command against the hosting cluster to register the managed cluster with the hub.
# NOTE for KinD clusters:
# 1. If the hub is KinD, use the parameter: --force-internal-endpoint-lookup
# 2. If the managed cluster is KinD, --managed-cluster-kubeconfig should be internal: `kind get kubeconfig --name managed --internal`
# "cluster1" may be replaced by any other arbitrary unique name.
clusteradm join \
    --hub-token <your token data> \
    --hub-apiserver <your hub cluster endpoint> \
    --wait \
    --cluster-name "cluster1" \
    --mode hosted \
    --managed-cluster-kubeconfig <your managed cluster kubeconfig> \
    --force-internal-endpoint-lookup \
    --context <your hosting cluster context>
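For example, if the managed cluster is a KinD cluster named managed (the name here is illustrative), the internal kubeconfig can be generated like this:
kind get kubeconfig --name managed --internal > managed-cluster.kubeconfig
# Then pass it via: --managed-cluster-kubeconfig managed-cluster.kubeconfig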
On non-KinD clusters:
# "cluster1" may be replaced by any other arbitrary unique name.
clusteradm join \
    --hub-token <your token data> \
    --hub-apiserver <your hub cluster endpoint> \
    --wait \
    --cluster-name "cluster1" \
    --mode hosted \
    --managed-cluster-kubeconfig <your managed cluster kubeconfig> \
    --context <your hosting cluster context>
To reduce the agent footprint on the managed cluster, singleton mode was introduced in v0.12.0. In singleton mode, the work and registration agents run as a single pod in the managed cluster.
Note: to run the klusterlet in singleton mode, you must have a clusteradm version equal to or higher than v0.12.0.
# NOTE: For KinD clusters use the parameter: --force-internal-endpoint-lookup
# "cluster1" may be replaced by any other arbitrary unique name.
clusteradm join \
    --hub-token <your token data> \
    --hub-apiserver <your hub cluster endpoint> \
    --wait \
    --cluster-name "cluster1" \
    --singleton \
    --force-internal-endpoint-lookup \
    --context ${CTX_MANAGED_CLUSTER}
On non-KinD clusters:
# "cluster1" may be replaced by any other arbitrary unique name.
clusteradm join \
    --hub-token <your token data> \
    --hub-apiserver <your hub cluster endpoint> \
    --wait \
    --cluster-name "cluster1" \
    --singleton \
    --context ${CTX_MANAGED_CLUSTER}
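After joining in singleton mode, a single combined agent pod runs instead of separate registration and work agents (the pod name below is illustrative):
kubectl -n open-cluster-management-agent get pod --context ${CTX_MANAGED_CLUSTER}
# NAME                               READY   STATUS    RESTARTS   AGE
# klusterlet-agent-5b6fd9dc4-xxxxx   1/1     Running   0          1m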
After the OCM agent is running on your managed cluster, it sends a "handshake" to the hub cluster and waits for approval from the hub cluster admin. In this section, we walk through accepting the registration requests from the perspective of an OCM hub admin.
Wait for the CSR object to be created on the hub cluster by your managed cluster's OCM agents:
kubectl get csr -w --context ${CTX_HUB_CLUSTER} | grep cluster1 # or the previously chosen cluster name
An example of a pending CSR request is shown below:
cluster1-tqcjj 33s kubernetes.io/kube-apiserver-client system:serviceaccount:open-cluster-management:cluster-bootstrap Pending
Accept the join request using the clusteradm
tool:
clusteradm accept --clusters cluster1 --context ${CTX_HUB_CLUSTER}
After running the accept command, the CSR from your managed cluster named "cluster1" will be approved. Additionally, the command instructs the OCM hub control plane to set up related objects (such as a namespace named "cluster1" in the hub cluster) and RBAC permissions automatically.
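If you prefer not to use clusteradm for this step, the equivalent manual steps are to approve the pending CSR (using the CSR name shown earlier) and mark the ManagedCluster as accepted:
kubectl certificate approve cluster1-tqcjj --context ${CTX_HUB_CLUSTER}
kubectl patch managedcluster cluster1 --type=merge -p '{"spec":{"hubAcceptsClient":true}}' --context ${CTX_HUB_CLUSTER}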
Verify the installation of the OCM agents on your managed cluster by running:
kubectl -n open-cluster-management-agent get pod --context ${CTX_MANAGED_CLUSTER}
NAME READY STATUS RESTARTS AGE
klusterlet-registration-agent-598fd79988-jxx7n 1/1 Running 0 19d
klusterlet-work-agent-7d47f4b5c5-dnkqw 1/1 Running 0 19d
Verify that the cluster1
ManagedCluster
object was created successfully by running:
kubectl get managedcluster --context ${CTX_HUB_CLUSTER}
Then you should get a result that resembles the following:
NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE
cluster1 true <your endpoint> True True 5m23s
If the managed cluster's AVAILABLE status is not True, refer to Troubleshooting to debug your cluster.
After the managed cluster is registered, test that you can deploy a pod to the managed cluster from the hub cluster. Create a manifest-work.yaml
as shown in this example:
apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata:
  name: mw-01
  namespace: ${MANAGED_CLUSTER_NAME}
spec:
  workload:
    manifests:
      - apiVersion: v1
        kind: Pod
        metadata:
          name: hello
          namespace: default
        spec:
          containers:
            - name: hello
              image: busybox
              command: ["sh", "-c", 'echo "Hello, Kubernetes!" && sleep 3600']
          restartPolicy: OnFailure
Apply the YAML file to the hub cluster.
kubectl apply -f manifest-work.yaml --context ${CTX_HUB_CLUSTER}
Verify that the manifestwork
resource was applied to the hub.
kubectl -n ${MANAGED_CLUSTER_NAME} get manifestwork/mw-01 --context ${CTX_HUB_CLUSTER} -o yaml
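To confirm that the work agent applied the manifests, you can also read the Applied and Available conditions directly (a jsonpath sketch):
kubectl -n ${MANAGED_CLUSTER_NAME} get manifestwork mw-01 --context ${CTX_HUB_CLUSTER} \
  -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'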
Check the managed cluster and see that the hello Pod has been deployed from the hub cluster.
$ kubectl -n default get pod --context ${CTX_MANAGED_CLUSTER}
NAME READY STATUS RESTARTS AGE
hello 1/1 Running 0 108s
If the managed cluster's AVAILABLE status is not True, debug as follows. For example, the result below may be shown when checking the managedcluster.
$ kubectl get managedcluster --context ${CTX_HUB_CLUSTER}
NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE
${MANAGED_CLUSTER_NAME} true https://localhost Unknown 46m
There are many possible causes for this problem. You can use the commands below to gather more debug information. If that information doesn't help, please file an issue with the OCM community.
On the hub cluster, check the managedcluster status.
kubectl get managedcluster ${MANAGED_CLUSTER_NAME} --context ${CTX_HUB_CLUSTER} -o yaml
On the hub cluster, check the lease status.
kubectl get lease -n ${MANAGED_CLUSTER_NAME} --context ${CTX_HUB_CLUSTER}
On the managed cluster, check the klusterlet status.
kubectl get klusterlet -o yaml --context ${CTX_MANAGED_CLUSTER}
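If the klusterlet status does not reveal the problem, the agent logs usually do. The deployment names below match the agent pods shown earlier; in singleton mode there is a single combined agent deployment instead:
kubectl -n open-cluster-management-agent logs deploy/klusterlet-registration-agent --context ${CTX_MANAGED_CLUSTER}
kubectl -n open-cluster-management-agent logs deploy/klusterlet-work-agent --context ${CTX_MANAGED_CLUSTER}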
Remove the resources generated when registering with the hub cluster.
clusteradm unjoin --cluster-name "cluster1" --context ${CTX_MANAGED_CLUSTER}
Check that the installation of the OCM agent is removed from the managed cluster.
kubectl -n open-cluster-management-agent get pod --context ${CTX_MANAGED_CLUSTER}
No resources found in open-cluster-management-agent namespace.
Check that the klusterlet is removed from the managed cluster.
kubectl get klusterlet --context ${CTX_MANAGED_CLUSTER}
error: the server doesn't have a resource type "klusterlet"