Getting Started
1 - Quick Start
Prerequisites
- Ensure kubectl and kustomize are installed.
- Ensure kind (v0.9.0 or later; the latest version is preferred) is installed.
Install clusteradm CLI tool
Run the following command to download and install the latest clusteradm
command-line tool:
curl -L https://raw.githubusercontent.com/open-cluster-management-io/clusteradm/main/install.sh | bash
Setup hub and managed cluster
Run the following command to quickly set up a hub cluster and 2 managed clusters with kind.
curl -L https://raw.githubusercontent.com/open-cluster-management-io/OCM/main/solutions/setup-dev-environment/local-up.sh | bash
If you want to set up OCM in a production environment or on a different Kubernetes distribution, please refer to the Start the control plane and Register a cluster guides.
What is next
Now you have the OCM control plane with 2 managed clusters connected! Let’s start your OCM journey.
- Deploy kubernetes resources onto a managed cluster
- Visit the kubernetes apiserver of a managed cluster from cluster-proxy
- Visit integrations to check whether an existing OCM add-on meets your use cases.
- Use the OCM VScode Extension to easily generate OCM related Kubernetes resources and track your cluster
2 - Installation
Install the core control plane that includes cluster registration and manifests distribution on the hub cluster.
Install the klusterlet agent on the managed cluster so that it can be registered and managed by the hub cluster.
2.1 - Start the control plane
Prerequisite
- The hub cluster should be v1.19+. (To run on a hub cluster version between v1.16 and v1.18, please manually enable the feature gate “V1beta1CSRAPICompatibility”.)
- Currently the bootstrap process relies on client authentication via CSR. Therefore, if your Kubernetes distribution (like EKS) doesn’t support it, you can choose the multicluster-controlplane as the hub controlplane.
- Ensure kubectl and kustomize are installed.
Network requirements
Configure your network settings for the hub cluster to allow the following connections.
| Direction | Endpoint | Protocol | Purpose | Used by |
| --- | --- | --- | --- | --- |
| Inbound | https://{hub-api-server-url}:{port} | TCP | Kubernetes API server of the hub cluster | OCM agents, including the add-on agents, running on the managed clusters |
Install clusteradm CLI tool
It’s recommended to run the following command to download and install the
latest release of the clusteradm
command-line tool:
curl -L https://raw.githubusercontent.com/open-cluster-management-io/clusteradm/main/install.sh | bash
You can also install the latest development version (main branch) by running:
# Installing clusteradm to $GOPATH/bin/
GO111MODULE=off go get -u open-cluster-management.io/clusteradm/...
Bootstrap a cluster manager
Before actually installing the OCM components into your clusters, export
the following environment variables in your terminal so that our
command-line tool clusteradm
can correctly identify the hub cluster.
# The context name of the clusters in your kubeconfig
export CTX_HUB_CLUSTER=<your hub cluster context>
Call clusteradm init
:
# By default, it installs the latest release of the OCM components.
# Use e.g. "--bundle-version=latest" to install latest development builds.
# NOTE: For hub cluster versions between v1.16 and v1.19 use the parameter: --use-bootstrap-token
clusteradm init --wait --context ${CTX_HUB_CLUSTER}
The clusteradm init
command installs the
registration-operator
on the hub cluster, which is responsible for consistently installing
and upgrading a few core components for the OCM environment.
After the init
command completes, a generated command is output on the console to
register your managed clusters. An example of the generated command is shown below.
clusteradm join \
--hub-token <your token data> \
--hub-apiserver <your hub kube-apiserver endpoint> \
--wait \
--cluster-name <cluster_name>
It’s recommended to save the command somewhere secure for future use. If it’s lost, you can use
clusteradm get token
to get the generated command again.
Check out the running instances of the control plane
kubectl -n open-cluster-management get pod --context ${CTX_HUB_CLUSTER}
NAME READY STATUS RESTARTS AGE
cluster-manager-695d945d4d-5dn8k 1/1 Running 0 19d
Additionally, to check out the instances of OCM’s hub control plane, run the following command:
kubectl -n open-cluster-management-hub get pod --context ${CTX_HUB_CLUSTER}
NAME READY STATUS RESTARTS AGE
cluster-manager-placement-controller-857f8f7654-x7sfz 1/1 Running 0 19d
cluster-manager-registration-controller-85b6bd784f-jbg8s 1/1 Running 0 19d
cluster-manager-registration-webhook-59c9b89499-n7m2x 1/1 Running 0 19d
cluster-manager-work-webhook-59cf7dc855-shq5p 1/1 Running 0 19d
...
The overall installation information is visible on the clustermanager
custom resource:
kubectl get clustermanager cluster-manager -o yaml --context ${CTX_HUB_CLUSTER}
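For reference, a trimmed sketch of what this resource typically contains is shown below; the image fields match those used later in the upgrade guide, while the status condition shown here is illustrative and may differ by release:
apiVersion: operator.open-cluster-management.io/v1
kind: ClusterManager
metadata:
  name: cluster-manager
spec:
  # Image pull specs of the core hub components (tags are placeholders)
  registrationImagePullSpec: quay.io/open-cluster-management/registration:<release>
  workImagePullSpec: quay.io/open-cluster-management/work:<release>
  placementImagePullSpec: quay.io/open-cluster-management/placement:<release>
status:
  conditions:
  # An "Applied" condition turning True indicates the operator rolled out the hub components
  - type: Applied
    status: "True"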
Uninstall the OCM from the control plane
Before uninstalling the OCM components from your clusters, please detach the managed cluster from the control plane.
clusteradm clean --context ${CTX_HUB_CLUSTER}
Check that the instances of OCM’s hub control plane are removed.
kubectl -n open-cluster-management-hub get pod --context ${CTX_HUB_CLUSTER}
No resources found in open-cluster-management-hub namespace.
kubectl -n open-cluster-management get pod --context ${CTX_HUB_CLUSTER}
No resources found in open-cluster-management namespace.
Check that the clustermanager resource is removed from the control plane.
kubectl get clustermanager --context ${CTX_HUB_CLUSTER}
error: the server doesn't have a resource type "clustermanager"
2.2 - Register a cluster
After the cluster manager is installed on the hub cluster, you need to install the klusterlet agent on another cluster so that it can be registered and managed by the hub cluster.
Prerequisite
Network requirements
Configure your network settings for the managed clusters to allow the following connections.
| Direction | Endpoint | Protocol | Purpose | Used by |
| --- | --- | --- | --- | --- |
| Outbound | https://{hub-api-server-url}:{port} | TCP | Kubernetes API server of the hub cluster | OCM agents, including the add-on agents, running on the managed clusters |
To use a proxy, please make sure the proxy server is well configured to allow the above connections and that the proxy server is reachable from the managed clusters. See Register a cluster to hub through proxy server for more details.
Install clusteradm CLI tool
It’s recommended to run the following command to download and install the
latest release of the clusteradm
command-line tool:
curl -L https://raw.githubusercontent.com/open-cluster-management-io/clusteradm/main/install.sh | bash
You can also install the latest development version (main branch) by running:
# Installing clusteradm to $GOPATH/bin/
GO111MODULE=off go get -u open-cluster-management.io/clusteradm/...
Bootstrap a klusterlet
Before actually installing the OCM components into your clusters, export
the following environment variables in your terminal so that our
command-line tool clusteradm
can correctly identify the managed cluster:
# The context name of the clusters in your kubeconfig
export CTX_HUB_CLUSTER=<your hub cluster context>
export CTX_MANAGED_CLUSTER=<your managed cluster context>
Copy the previously generated command – clusteradm join
, and add arguments based on the distribution of your managed cluster.
NOTE: If there is no configmap kube-root-ca.crt
in the kube-public namespace of the hub cluster,
the flag --ca-file should be set to provide a valid hub CA file to help set
up the external client.
# NOTE: For KinD clusters use the parameter: --force-internal-endpoint-lookup
# Use "cluster1" or another arbitrary unique name for --cluster-name
clusteradm join \
--hub-token <your token data> \
--hub-apiserver <your hub cluster endpoint> \
--wait \
--cluster-name "cluster1" \
--force-internal-endpoint-lookup \
--context ${CTX_MANAGED_CLUSTER}

# On other distributions, omit --force-internal-endpoint-lookup:
clusteradm join \
--hub-token <your token data> \
--hub-apiserver <your hub cluster endpoint> \
--wait \
--cluster-name "cluster1" \
--context ${CTX_MANAGED_CLUSTER}
Bootstrap a klusterlet in hosted mode (Optional)
With the command above, the klusterlet components (registration-agent and work-agent) are deployed on the managed cluster, so the hub cluster must be exposed to the managed cluster. We also provide an option to run the klusterlet components outside the managed cluster, for example, on the hub cluster (hosted mode).
Hosted mode deployment is still in an experimental stage; consider using it only when you:
- want to reduce the footprint of the managed cluster, or
- do not want to expose the hub cluster to the managed cluster directly.
In hosted mode, the cluster where the klusterlet runs is called the hosting cluster. Run the following command on the hosting cluster to register the managed cluster to the hub.
# NOTE for KinD clusters:
# 1. If the hub is KinD, use the parameter: --force-internal-endpoint-lookup
# 2. If the managed cluster is KinD, --managed-cluster-kubeconfig should be an internal kubeconfig: `kind get kubeconfig --name managed --internal`
# Use "cluster1" or another arbitrary unique name for --cluster-name
clusteradm join \
--hub-token <your token data> \
--hub-apiserver <your hub cluster endpoint> \
--wait \
--cluster-name "cluster1" \
--mode hosted \
--managed-cluster-kubeconfig <your managed cluster kubeconfig> \
--force-internal-endpoint-lookup \
--context <your hosting cluster context>

# On other distributions, omit --force-internal-endpoint-lookup:
clusteradm join \
--hub-token <your token data> \
--hub-apiserver <your hub cluster endpoint> \
--wait \
--cluster-name "cluster1" \
--mode hosted \
--managed-cluster-kubeconfig <your managed cluster kubeconfig> \
--context <your hosting cluster context>
Bootstrap a klusterlet in singleton mode
To reduce the footprint of the agent in the managed cluster, singleton mode was introduced in v0.12.0.
In singleton mode, the work and registration agents run as a single pod in the managed cluster.
Note: to run the klusterlet in singleton mode, you must have a clusteradm version equal to or higher than v0.12.0.
# NOTE: For KinD clusters use the parameter: --force-internal-endpoint-lookup
# Use "cluster1" or another arbitrary unique name for --cluster-name
clusteradm join \
--hub-token <your token data> \
--hub-apiserver <your hub cluster endpoint> \
--wait \
--cluster-name "cluster1" \
--singleton \
--force-internal-endpoint-lookup \
--context ${CTX_MANAGED_CLUSTER}

# On other distributions, omit --force-internal-endpoint-lookup:
clusteradm join \
--hub-token <your token data> \
--hub-apiserver <your hub cluster endpoint> \
--wait \
--cluster-name "cluster1" \
--singleton \
--context ${CTX_MANAGED_CLUSTER}
Accept the join request and verify
After the OCM agent is running on your managed cluster, it will be sending a “handshake” to your hub cluster and waiting for an approval from the hub cluster admin. In this section, we will walk through accepting the registration requests from the perspective of an OCM’s hub admin.
Wait for the creation of the CSR object which will be created by your managed clusters’ OCM agents on the hub cluster:
kubectl get csr -w --context ${CTX_HUB_CLUSTER} | grep cluster1 # or the previously chosen cluster name
An example of a pending CSR request is shown below:
cluster1-tqcjj 33s kubernetes.io/kube-apiserver-client system:serviceaccount:open-cluster-management:cluster-bootstrap Pending
Accept the join request using the clusteradm tool:
clusteradm accept --clusters cluster1 --context ${CTX_HUB_CLUSTER}
After running the accept command, the CSR from your managed cluster named “cluster1” will be approved. Additionally, it will instruct the OCM hub control plane to set up related objects (such as a namespace named “cluster1” in the hub cluster) and RBAC permissions automatically.
Verify the installation of the OCM agents on your managed cluster by running:
kubectl -n open-cluster-management-agent get pod --context ${CTX_MANAGED_CLUSTER}
NAME                                             READY   STATUS    RESTARTS   AGE
klusterlet-registration-agent-598fd79988-jxx7n   1/1     Running   0          19d
klusterlet-work-agent-7d47f4b5c5-dnkqw           1/1     Running   0          19d
Verify that the cluster1 ManagedCluster object was created successfully by running:
kubectl get managedcluster --context ${CTX_HUB_CLUSTER}
Then you should get a result that resembles the following:
NAME       HUB ACCEPTED   MANAGED CLUSTER URLS   JOINED   AVAILABLE   AGE
cluster1   true           <your endpoint>        True     True        5m23s
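Under the hood, accepting the cluster sets hubAcceptsClient to true on the ManagedCluster resource, which is what allows the hub to create the cluster namespace and RBAC for it. A trimmed sketch of the accepted resource (condition names are illustrative and may vary by release):
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: cluster1
spec:
  # Set to true by "clusteradm accept"
  hubAcceptsClient: true
status:
  conditions:
  - type: ManagedClusterJoined
    status: "True"
  - type: ManagedClusterConditionAvailable
    status: "True"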
If the managed cluster status is not true, refer to Troubleshooting to debug on your cluster.
Apply a Manifestwork
After the managed cluster is registered, test that you can deploy a pod to the managed cluster from the hub cluster. Create a manifest-work.yaml
as shown in this example:
apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata:
name: mw-01
namespace: ${MANAGED_CLUSTER_NAME}
spec:
workload:
manifests:
- apiVersion: v1
kind: Pod
metadata:
name: hello
namespace: default
spec:
containers:
- name: hello
image: busybox
command: ["sh", "-c", 'echo "Hello, Kubernetes!" && sleep 3600']
restartPolicy: OnFailure
Apply the yaml file to the hub cluster.
kubectl apply -f manifest-work.yaml --context ${CTX_HUB_CLUSTER}
Verify that the manifestwork
resource was applied to the hub.
kubectl -n ${MANAGED_CLUSTER_NAME} get manifestwork/mw-01 --context ${CTX_HUB_CLUSTER} -o yaml
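In the returned YAML, the status section reports whether the manifests were applied and are available on the managed cluster. A trimmed sketch of a healthy status (the field names follow the ManifestWork API, but the exact entries you see may differ):
status:
  conditions:
  # The work agent applied all manifests to the managed cluster
  - type: Applied
    status: "True"
  # The applied resources are available on the managed cluster
  - type: Available
    status: "True"
  resourceStatus:
    manifests:
    - resourceMeta:
        kind: Pod
        name: hello
        namespace: default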
Check the managed cluster and verify that the hello Pod has been deployed from the hub cluster.
$ kubectl -n default get pod --context ${CTX_MANAGED_CLUSTER}
NAME READY STATUS RESTARTS AGE
hello 1/1 Running 0 108s
Troubleshooting
If the managed cluster status is not true:
For example, the result below is shown when checking the managedcluster.
$ kubectl get managedcluster --context ${CTX_HUB_CLUSTER}
NAME                      HUB ACCEPTED   MANAGED CLUSTER URLS   JOINED   AVAILABLE   AGE
${MANAGED_CLUSTER_NAME}   true           https://localhost               Unknown     46m
There are many possible causes for this problem. You can use the commands below to get more debug info. If the provided info doesn’t help, please file an issue with us.
On the hub cluster, check the managedcluster status.
kubectl get managedcluster ${MANAGED_CLUSTER_NAME} --context ${CTX_HUB_CLUSTER} -o yaml
On the hub cluster, check the lease status.
kubectl get lease -n ${MANAGED_CLUSTER_NAME} --context ${CTX_HUB_CLUSTER}
On the managed cluster, check the klusterlet status.
kubectl get klusterlet -o yaml --context ${CTX_MANAGED_CLUSTER}
Detach the cluster from hub
Remove the resources generated when registering with the hub cluster.
clusteradm unjoin --cluster-name "cluster1" --context ${CTX_MANAGED_CLUSTER}
Check that the OCM agent is removed from the managed cluster.
kubectl -n open-cluster-management-agent get pod --context ${CTX_MANAGED_CLUSTER}
No resources found in open-cluster-management-agent namespace.
Check that the klusterlet is removed from the managed cluster.
kubectl get klusterlet --context ${CTX_MANAGED_CLUSTER}
error: the server doesn't have a resource type "klusterlet"
3 - Add-ons and Integrations
Enhance the open-cluster-management core control plane with optional add-ons and integrations.
3.1 - Application lifecycle management
After the cluster manager is installed, you can install the application management components on the hub cluster.
Architecture
For more details, visit the multicloud-operators-subscription GitHub page.
Prerequisite
You must meet the following prerequisites to install the application lifecycle management add-on:
- Ensure the open-cluster-management cluster manager is installed. See Start the control plane for more information.
- Ensure the open-cluster-management klusterlet is installed. See Register a cluster for more information.
Install via Clusteradm CLI tool
Ensure clusteradm
CLI tool is installed. Download and extract the clusteradm binary. For more details see the clusteradm GitHub page.
$ clusteradm
Usage:
clusteradm [command]
...
Deploy the subscription operators to the hub cluster.
$ kubectl config use-context ${CTX_HUB_CLUSTER}
$ clusteradm install hub-addon --names application-manager
Installing built-in application-manager add-on to the Hub cluster...
$ kubectl -n open-cluster-management get deploy multicluster-operators-subscription --context ${CTX_HUB_CLUSTER}
NAME READY UP-TO-DATE AVAILABLE AGE
multicluster-operators-subscription 1/1 1 1 25s
Create the open-cluster-management-agent-addon
namespace on the managed cluster.
$ kubectl create ns open-cluster-management-agent-addon --context ${CTX_MANAGED_CLUSTER}
namespace/open-cluster-management-agent-addon created
Deploy the subscription add-on to the corresponding managed cluster namespace on the hub cluster.
$ kubectl config use-context ${CTX_HUB_CLUSTER}
$ clusteradm addon enable --names application-manager --clusters ${MANAGED_CLUSTER_NAME}
Deploying application-manager add-on to managed cluster: <managed_cluster_name>.
$ kubectl -n ${MANAGED_CLUSTER_NAME} get managedclusteraddon # kubectl -n cluster1 get managedclusteraddon
NAME AVAILABLE DEGRADED PROGRESSING
application-manager True
Check the subscription add-on deployment on the managed cluster.
$ kubectl -n open-cluster-management-agent-addon get deploy --context ${CTX_MANAGED_CLUSTER}
NAME READY UP-TO-DATE AVAILABLE AGE
application-manager 1/1 1 1 103s
Install from source
Clone the multicloud-operators-subscription
repository.
git clone https://github.com/open-cluster-management-io/multicloud-operators-subscription
cd multicloud-operators-subscription
Deploy the subscription operators to the hub cluster.
$ kubectl config use-context ${CTX_HUB_CLUSTER}
$ make deploy-hub
$ kubectl -n open-cluster-management get deploy multicluster-operators-subscription --context ${CTX_HUB_CLUSTER}
NAME READY UP-TO-DATE AVAILABLE AGE
multicluster-operators-subscription 1/1 1 1 25s
Create the open-cluster-management-agent-addon
namespace on the managed cluster. This is optional if clusteradm
is used, since it creates the namespace during the join action.
$ kubectl create ns open-cluster-management-agent-addon --context ${CTX_MANAGED_CLUSTER}
namespace/open-cluster-management-agent-addon created
Deploy the subscription add-on to the corresponding managed cluster namespace on the hub cluster.
$ kubectl config use-context ${CTX_HUB_CLUSTER}
$ make deploy-addon
$ kubectl -n ${MANAGED_CLUSTER_NAME} get managedclusteraddon # kubectl -n cluster1 get managedclusteraddon
NAME AVAILABLE DEGRADED PROGRESSING
application-manager True
Check the subscription add-on deployment on the managed cluster.
$ kubectl -n open-cluster-management-agent-addon get deploy --context ${CTX_MANAGED_CLUSTER}
NAME READY UP-TO-DATE AVAILABLE AGE
application-manager 1/1 1 1 103s
What is next
After a successful deployment, test the subscription operator with a helm
subscription. Run the following command, where examples/helmrepo-hub-channel is located here:
kubectl apply -f examples/helmrepo-hub-channel --context ${CTX_HUB_CLUSTER}
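For reference, that directory boils down to a Channel that points at a Helm repository plus a Subscription that consumes it. The sketch below is illustrative only (names, namespace, repository URL and placement are placeholders, not a verbatim copy of the example):
apiVersion: apps.open-cluster-management.io/v1
kind: Channel
metadata:
  name: sample-helmrepo
  namespace: sample-ns
spec:
  type: HelmRepo
  pathname: <URL of the Helm repository>
---
apiVersion: apps.open-cluster-management.io/v1
kind: Subscription
metadata:
  name: sample-sub
  namespace: sample-ns
spec:
  channel: sample-ns/sample-helmrepo
  name: <chart name in the repository>
  placement:
    placementRef:
      kind: PlacementRule
      name: <PlacementRule selecting the managed clusters>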
After a while, you should see the subscription propagated to the managed cluster and the Helm app installed. By default, when a subscribed application is deployed to the target clusters, the application is installed in the corresponding subscription namespace. To confirm, run the following command:
$ kubectl get subscriptions.apps --context ${CTX_MANAGED_CLUSTER}
NAME STATUS AGE LOCAL PLACEMENT TIME WINDOW
nginx-sub Subscribed 107m true
$ kubectl get pod --context ${CTX_MANAGED_CLUSTER}
NAME READY STATUS RESTARTS AGE
nginx-ingress-47f79-controller-6f495bb5f9-lpv7z 1/1 Running 0 108m
nginx-ingress-47f79-default-backend-7559599b64-rhwgm 1/1 Running 0 108m
Try this out
Let VScode Extension help you out!
Create a Bootstrap Project specifically tailored to your channel type, with all the Custom Resource (CR) templates you will need already auto-generated to get you started!
3.2 - Cluster proxy
Cluster proxy is an OCM addon providing L4 network connectivity from the hub cluster to the managed clusters without any additional requirements on the managed clusters’ network infrastructure, by leveraging the Kubernetes official SIG sub-project apiserver-network-proxy.
Background
The original architecture of OCM allows a cluster from anywhere to be registered and managed by OCM’s control plane (i.e. the hub cluster) as long as a klusterlet agent can reach the hub cluster’s endpoint. So the minimal requirement on the managed cluster’s network infrastructure in OCM is “klusterlet -> hub” connectivity. However, there are still cases where components in the hub cluster need to proactively dial/request services in the managed clusters, which requires “hub -> klusterlet” connectivity. In addition, the cases can be even more complex when the managed clusters are not all in the same network.
Cluster proxy aims at seamlessly delivering outbound L4 requests to the services in the managed cluster’s network without any assumptions about the infrastructure, as long as the clusters are successfully registered. Basically the connectivity provided by cluster proxy works over the secured reverse proxy tunnels established by the apiserver-network-proxy.
About apiserver-network-proxy
Apiserver-network-proxy is the underlying technique of the Kubernetes feature called konnectivity egress-selector, which is mainly for setting up a TCP-level proxy for the kube-apiserver to access the node/cluster network. Here are a few terms we need to clarify before we elaborate on how cluster proxy resolves multi-cluster control plane network connectivity for us:
- Proxy Tunnel: A gRPC long connection that multiplexes and transmits TCP-level traffic from the proxy servers to the proxy agents. Note that there will be only one tunnel instance between each pair of server and agent.
- Proxy Server: An mTLS gRPC server opened for establishing tunnels, which is the traffic ingress of the proxy tunnel.
- Proxy Agent: An mTLS gRPC agent that maintains the tunnel to the server and is the traffic egress of the proxy tunnel.
- Konnectivity Client: The SDK library for talking through the tunnel. Applicable to any Golang client whose Dialer is overridable. Note that for non-Golang clients, the proxy server also supports HTTP-Connect based proxying as an alternative.
Architecture
Cluster proxy runs inside OCM’s hub cluster as an addon manager which is developed based on the Addon-Framework. The addon manager of cluster proxy will be responsible for:
- Managing the installation of proxy servers in the hub cluster.
- Managing the installation of proxy agents in the managed cluster.
- Collecting healthiness and the other stats consistently in the hub cluster.
The following picture shows the overall architecture of cluster proxy:
Note that the green lines in the picture above are the active proxy tunnels between proxy servers and agents; an HA setup is natively supported by apiserver-network-proxy for both the servers and the agents. The orange dashed line started by the konnectivity client is the path along which traffic flows from the hub cluster to arbitrary managed clusters. Meanwhile the core components, including registration and work, help us manage the lifecycle of all the components distributed across the managed clusters, so the hub admin doesn’t need to operate the managed clusters directly to install or configure the proxy agents.
Prerequisite
You must meet the following prerequisites to install the cluster-proxy:
- Ensure your open-cluster-management release is greater than v0.5.0.
- Ensure kubectl is installed.
- Ensure helm is installed.
Installation
To install the cluster proxy addon to the OCM control plane, run:
$ helm repo add ocm https://open-cluster-management.io/helm-charts
$ helm repo update
$ helm search repo ocm
NAME CHART VERSION APP VERSION DESCRIPTION
ocm/cluster-proxy v0.1.1 1.0.0 A Helm chart for Cluster-Proxy
...
Then run the following helm command to install the cluster-proxy addon:
$ helm install -n open-cluster-management-addon --create-namespace \
cluster-proxy ocm/cluster-proxy
Note: If you’re using a non-Kind cluster, for example, an Openshift cluster,
you need to configure the ManagedProxyConfiguration
by setting proxyServer.entrypointAddress
in the values.yaml
to the address of the proxy server.
To do this at install time, you can run the following command:
$ helm install -n open-cluster-management-addon --create-namespace \
cluster-proxy ocm/cluster-proxy \
--set "proxyServer.entrypointAddress=<address of the proxy server>"
After the installation, you can check the deployment status of the cluster-proxy addon by running the following command:
$ kubectl -n open-cluster-management-addon get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
cluster-proxy 3/3 3 3 24h
cluster-proxy-addon-manager 1/1 1 1 24h
...
The addon manager of cluster-proxy is created in the hub cluster
in the form of a deployment named cluster-proxy-addon-manager. As shown
above, the proxy servers are also created as a deployment resource
called cluster-proxy.
By default, the addon manager automatically discovers the addition or removal of managed clusters and installs the proxy agents into them on the fly. To check the health status of the proxy agents, run:
$ kubectl get managedclusteraddon -A
NAMESPACE NAME AVAILABLE DEGRADED PROGRESSING
<cluster#1> cluster-proxy True
<cluster#2> cluster-proxy True
The proxy agent distributed to each managed cluster periodically renews the lease lock of the addon instance.
Usage
Command-line tools
Use clusteradm to check the status of the cluster-proxy addon:
$ clusteradm proxy health
CLUSTER NAME INSTALLED AVAILABLE PROBED HEALTH LATENCY
<cluster#1> True True True 67.595144ms
<cluster#2> True True True 85.418368ms
Example code
An example client in the cluster proxy repo shows us how to dynamically talk to the kube-apiserver of a managed cluster from the hub cluster by simply prescribing the name of the target cluster. Here’s also a TL;DR code snippet:
// 1. instantiate a dialing tunnel instance.
// NOTE: recommended to be a singleton in your golang program.
tunnel, err := konnectivity.CreateSingleUseGrpcTunnel(
context.TODO(),
<your proxy server endpoint>,
grpc.WithTransportCredentials(grpccredentials.NewTLS(<your proxy server TLS config>)),
)
if err != nil {
panic(err)
}
...
// 2. Overriding the Dialer to tunnel. Dialer is a common abstraction
// in Golang SDK.
cfg.Dial = tunnel.DialContext
Another example is cluster-gateway, which is an aggregated apiserver that optionally works over cluster-proxy to route traffic to the managed clusters dynamically over HTTPS.
Note that by default the client credential for the konnectivity client is persisted as secret resources under the namespace where the addon-manager is running. Hence, to mount the secret into systems in other namespaces, users are expected to copy the secret manually.
More insights
Troubleshooting
The installation of proxy servers and agents is prescribed by the custom resource called “managedproxyconfiguration”. We can check it with the following command:
$ kubectl get managedproxyconfiguration cluster-proxy -o yaml
apiVersion: proxy.open-cluster-management.io/v1alpha1
kind: ManagedProxyConfiguration
metadata: ...
spec:
proxyAgent:
image: <expected image of the proxy agents>
replicas: <expected replicas of proxy agents>
proxyServer:
entrypoint:
loadBalancerService:
name: proxy-agent-entrypoint
type: LoadBalancerService # Or "Hostname" to set a fixed address
# for establishing proxy tunnels.
image: <expected image of the proxy servers>
inClusterServiceName: proxy-entrypoint
namespace: <target namespace to install proxy server>
replicas: <expected replicas of proxy servers>
authentication: # Customize authentication between proxy server/agent
status:
conditions: ...
Related materials
See the original design proposal for reference.
3.3 - Managed service account
Managed Service Account is an OCM addon enabling a hub cluster admin to manage service accounts across multiple clusters with ease. By controlling the creation and removal of the service account, the addon agent projects and rotates the corresponding token back to the hub cluster, which is very useful for Kube API clients on the hub cluster that issue requests against the managed clusters.
Background
Normally there are two major approaches for a Kube API client to authenticate and access a Kubernetes cluster:
- Valid X.509 certificate-key pair
- Service account bearer token
The service account token will be automatically persisted as a secret resource inside the hosting Kubernetes cluster upon creation, which is commonly used for the “in-cluster” client. However, in terms of OCM, the hub cluster is an entirely external system to the managed clusters, so we will need a local agent in each managed cluster to reflect the tokens consistently to the hub cluster so that the Kube API client from the hub cluster can “push” requests directly against the managed cluster. By delegating the multi-cluster service account management to this addon, we can:
- Project the service account token from the managed clusters to the hub cluster with custom API audience.
- Rotate the service account tokens dynamically.
- Homogenize the client identities so that we can easily write a static RBAC policy that applies to multiple managed clusters.
Prerequisite
You must meet the following prerequisites to install the managed service account addon:
- Ensure your open-cluster-management release is greater than v0.5.0.
- Ensure kubectl is installed.
- Ensure helm is installed.
Installation
To install the managed service account addon to the OCM control plane, run:
$ helm repo add ocm https://open-cluster-management.io/helm-charts
$ helm repo update
$ helm search repo ocm
NAME CHART VERSION APP VERSION DESCRIPTION
ocm/managed-serviceaccount <...> 1.0.0 A Helm chart for Managed ServiceAccount Addon
...
Then run the following helm command to continue the installation:
$ helm install -n open-cluster-management-addon --create-namespace \
managed-serviceaccount ocm/managed-serviceaccount
$ kubectl -n open-cluster-management-addon get pod
NAME READY STATUS RESTARTS AGE
managed-serviceaccount-addon-manager-5m9c95b7d8-xsb94 1/1 Running 1 4d4h
...
By default, the addon manager automatically discovers the addition or removal of managed clusters and installs the managed serviceaccount agents into them on the fly. To check the health status of the managed serviceaccount agents, run:
$ kubectl get managedclusteraddon -A
NAMESPACE NAME AVAILABLE DEGRADED PROGRESSING
<cluster name> managed-serviceaccount True
Usage
To exercise the new ManagedServiceAccount
API introduced by this addon, we
can start by applying the following sample resource:
$ export CLUSTER_NAME=<cluster name>
$ kubectl create -f - <<EOF
apiVersion: authentication.open-cluster-management.io/v1alpha1
kind: ManagedServiceAccount
metadata:
name: my-sample
namespace: ${CLUSTER_NAME}
spec:
rotation: {}
EOF
Then the addon agent in each of the managed clusters is responsible for
executing and refreshing the status of the ManagedServiceAccount, e.g.:
$ kubectl describe ManagedServiceAccount -n cluster1
...
status:
conditions:
- lastTransitionTime: "2021-12-09T09:08:15Z"
message: ""
reason: TokenReported
status: "True"
type: TokenReported
- lastTransitionTime: "2021-12-09T09:08:15Z"
message: ""
reason: SecretCreated
status: "True"
type: SecretCreated
expirationTimestamp: "2022-12-04T09:08:15Z"
tokenSecretRef:
lastRefreshTimestamp: "2021-12-09T09:08:15Z"
name: my-sample
The service account will be created in the managed cluster (assume the name is cluster1
):
$ kubectl get sa my-sample -n open-cluster-management-managed-serviceaccount --context kind-cluster1
NAME SECRETS AGE
my-sample 1 9m57s
The corresponding secret will also be created in the hub cluster, which is visible via:
$ kubectl -n <your cluster> get secret my-sample
NAME TYPE DATA AGE
my-sample Opaque 2 2m23s
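Because the agent creates the service account with the same namespace and name on every managed cluster, the projected token carries an identical identity everywhere, so one static RBAC manifest can be applied to each managed cluster. A sketch, assuming the my-sample ManagedServiceAccount above and the built-in view ClusterRole:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: my-sample-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view            # any ClusterRole the projected token should be allowed to exercise
subjects:
- kind: ServiceAccount
  name: my-sample       # matches the ManagedServiceAccount name
  namespace: open-cluster-management-managed-serviceaccount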
Related materials
Repo: https://github.com/open-cluster-management-io/managed-serviceaccount
See the design proposal at: https://github.com/open-cluster-management-io/enhancements/tree/main/enhancements/sig-architecture/19-projected-serviceaccount-token
3.4 - Policy controllers
The Policy API on the hub delivers the policies defined in spec.policy-templates
to the managed
clusters via the policy framework controllers. Once on the managed
cluster, these Policy Templates are acted upon by the associated controller on the managed cluster. The policy
framework supports delivering the Policy Template kinds listed.
Configuration policy
The ConfigurationPolicy
is provided by OCM and defines Kubernetes manifests to compare with objects that currently
exist on the cluster. The action that the ConfigurationPolicy
will take is determined by its complianceType
.
Compliance types include musthave
, mustnothave
, and mustonlyhave
. musthave
means the object should have the
listed keys and values as a subset of the larger object. mustnothave
means an object matching the listed keys and
values should not exist. mustonlyhave
ensures objects only exist with the keys and values exactly as defined.
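For example, a minimal ConfigurationPolicy using musthave could look like the following sketch (the policy name and the inspected object are illustrative):
apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: require-team-label-on-ns
spec:
  remediationAction: inform    # or "enforce" to have the controller correct drift
  severity: low
  object-templates:
  - complianceType: musthave   # the object must contain at least these keys/values
    objectDefinition:
      apiVersion: v1
      kind: Namespace
      metadata:
        name: default
        labels:
          team: engineering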
Open Policy Agent Gatekeeper
Gatekeeper is a validating webhook with auditing capabilities that can enforce custom resource definition-based
policies that are run with the Open Policy Agent (OPA). Gatekeeper ConstraintTemplates
and constraints can be
provided in an OCM Policy
to sync to managed clusters that have Gatekeeper installed on them.
3.4.1 - Configuration Policy
The ConfigurationPolicy
defines Kubernetes manifests to compare with objects that currently exist on the cluster. The
Configuration policy controller is provided by Open Cluster Management and runs on managed clusters.
Prerequisites
You must meet the following prerequisites to install the configuration policy controller:
Ensure Golang is installed, if you are planning to install from the source.
Ensure the
open-cluster-management
policy framework is installed. See Policy Framework for more information.
Installing the configuration policy controller
Deploy via Clusteradm CLI
Ensure clusteradm
CLI is installed and is newer than v0.3.0. Download and extract the
clusteradm binary. For more details see the
clusteradm GitHub page.
Deploy the configuration policy controller to the managed clusters (this command is the same for a self-managed hub):
# Deploy the configuration policy controller
clusteradm addon enable addon --names config-policy-controller --clusters <cluster_name> --context ${CTX_HUB_CLUSTER}
Ensure the pod is running on the managed cluster with the following command:
$ kubectl get pods -n open-cluster-management-agent-addon
NAME                                        READY   STATUS    RESTARTS   AGE
config-policy-controller-7f8fb64d8c-pmfx4   1/1     Running   0          44s
Sample configuration policy
After a successful deployment, test the policy framework and configuration policy controller with a sample policy.
For more information on how to use a ConfigurationPolicy
, read the
Policy
API concept section.
Run the following command to create a policy on the hub that uses Placement:
# Configure kubectl to point to the hub cluster
kubectl config use-context ${CTX_HUB_CLUSTER}
# Apply the example policy and placement
kubectl apply -n default -f https://raw.githubusercontent.com/open-cluster-management-io/policy-collection/main/community/CM-Configuration-Management/policy-pod-placement.yaml
Update the Placement to distribute the policy to the managed cluster with the following command (this clusterSelector will deploy the policy to all managed clusters):
kubectl patch -n default placement.cluster.open-cluster-management.io/placement-policy-pod --type=merge -p "{\"spec\":{\"predicates\":[{\"requiredClusterSelector\":{\"labelSelector\":{\"matchExpressions\":[]}}}]}}"
Make sure the default namespace has a ManagedClusterSetBinding for a ManagedClusterSet with at least one managed cluster resource in the ManagedClusterSet. See Bind ManagedClusterSet to a namespace for more information on this (a sample binding is sketched after this list).
To confirm that the managed cluster is selected by the Placement, run the following command:
$ kubectl get -n default placementdecision.cluster.open-cluster-management.io/placement-policy-pod-decision-1 -o yaml
...
status:
  decisions:
  - clusterName: <managed cluster name>
    reason: ""
...
Enforce the policy to make the configuration policy automatically correct any misconfigurations on the managed cluster:
$ kubectl patch -n default policy.policy.open-cluster-management.io/policy-pod --type=merge -p "{\"spec\":{\"remediationAction\": \"enforce\"}}"
policy.policy.open-cluster-management.io/policy-pod patched
After a few seconds, your policy is propagated to the managed cluster. To confirm, run the following command:
$ kubectl config use-context ${CTX_MANAGED_CLUSTER}
$ kubectl get policy -A
NAMESPACE   NAME                 REMEDIATION ACTION   COMPLIANCE STATE   AGE
cluster1    default.policy-pod   enforce              Compliant          4m32s
The missing pod is created by the policy on the managed cluster. To confirm, run the following command on the managed cluster:
$ kubectl get pod -n default
NAME               READY   STATUS    RESTARTS   AGE
sample-nginx-pod   1/1     Running   0          23s
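The ManagedClusterSetBinding mentioned above binds a ManagedClusterSet into the default namespace so that the Placement there can select its clusters. A sketch, assuming a cluster set named default (the API version may differ by release):
apiVersion: cluster.open-cluster-management.io/v1beta2
kind: ManagedClusterSetBinding
metadata:
  name: default        # must match the name of the ManagedClusterSet being bound
  namespace: default   # the namespace where the Policy and Placement live
spec:
  clusterSet: default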
3.4.2 - Open Policy Agent Gatekeeper
Gatekeeper is a validating webhook with auditing capabilities that can enforce custom resource definition-based policies that are run with the Open Policy Agent (OPA). Gatekeeper constraints can be used to evaluate Kubernetes resource compliance. You can leverage OPA as the policy engine, and use Rego as the policy language.
Installing Gatekeeper
See the Gatekeeper documentation to install the desired version of Gatekeeper to the managed cluster.
Sample Gatekeeper policy
Gatekeeper policies are written using constraint templates and constraints. View the following YAML examples that use
Gatekeeper constraints in an OCM Policy
:
ConstraintTemplates and constraints: Use the Gatekeeper integration feature by using OCM policies for multicluster distribution of Gatekeeper constraints and Gatekeeper audit results aggregation on the hub cluster. The following example defines a Gatekeeper ConstraintTemplate and constraint (K8sRequiredLabels) to ensure the “gatekeeper” label is set on all namespaces:
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: require-gatekeeper-labels-on-ns
spec:
  remediationAction: inform # (1)
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: templates.gatekeeper.sh/v1beta1
        kind: ConstraintTemplate
        metadata:
          name: k8srequiredlabels
        spec:
          crd:
            spec:
              names:
                kind: K8sRequiredLabels
              validation:
                openAPIV3Schema:
                  properties:
                    labels:
                      type: array
                      items: string
          targets:
            - target: admission.k8s.gatekeeper.sh
              rego: |
                package k8srequiredlabels
                violation[{"msg": msg, "details": {"missing_labels": missing}}] {
                  provided := {label | input.review.object.metadata.labels[label]}
                  required := {label | label := input.parameters.labels[_]}
                  missing := required - provided
                  count(missing) > 0
                  msg := sprintf("you must provide labels: %v", [missing])
                }
    - objectDefinition:
        apiVersion: constraints.gatekeeper.sh/v1beta1
        kind: K8sRequiredLabels
        metadata:
          name: ns-must-have-gk
        spec:
          enforcementAction: dryrun
          match:
            kinds:
              - apiGroups: [""]
                kinds: ["Namespace"]
          parameters:
            labels: ["gatekeeper"]
- Since the remediationAction is set to “inform”, the
enforcementAction
field of the Gatekeeper constraint is overridden to “warn”. This means that Gatekeeper detects and warns you about creating or updating a namespace that is missing the “gatekeeper” label. If the policyremediationAction
is set to “enforce”, the Gatekeeper constraintenforcementAction
field is overridden to “deny”. In this context, this configuration prevents any user from creating or updating a namespace that is missing the gatekeeper label.
With the previous policy, you might receive the following policy status message:
warn - you must provide labels: {“gatekeeper”} (on Namespace default); warn - you must provide labels: {“gatekeeper”} (on Namespace gatekeeper-system).
Once a policy containing Gatekeeper constraints or ConstraintTemplates is deleted, the constraints and ConstraintTemplates are also deleted from the managed cluster.
Notes:
- The Gatekeeper audit functionality runs every minute by default. Audit results are sent back to the hub cluster to be viewed in the OCM policy status of the managed cluster.
Auditing Gatekeeper events: The following example uses an OCM configuration policy within an OCM policy to check for Kubernetes API requests denied by the Gatekeeper admission webhook:
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-gatekeeper-admission
spec:
  remediationAction: inform
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: policy-gatekeeper-admission
        spec:
          remediationAction: inform # will be overridden by remediationAction in parent policy
          severity: low
          object-templates:
            - complianceType: mustnothave
              objectDefinition:
                apiVersion: v1
                kind: Event
                metadata:
                  namespace: gatekeeper-system # set it to the actual namespace where gatekeeper is running if different
                  annotations:
                    constraint_action: deny
                    constraint_kind: K8sRequiredLabels
                    constraint_name: ns-must-have-gk
                    event_type: violation
3.5 - Policy framework
The policy framework provides governance capabilities to OCM managed Kubernetes clusters. Policies provide visibility and drive remediation for various security and configuration aspects to help IT administrators meet their requirements.
API Concepts
View the Policy API page for additional details about the Policy API managed by the Policy Framework components.
Architecture
The governance policy framework distributes policies to managed clusters and collects results to send back to the hub cluster.
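To give a feel for how distribution works, a policy on the hub is typically accompanied by a Placement that selects clusters and a PlacementBinding that ties the two together. A hedged sketch (names are illustrative, the policy-templates content is omitted, and API versions may vary by release):
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-example
  namespace: default
spec:
  remediationAction: inform
  disabled: false
  policy-templates: []      # the policy content to distribute
---
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: placement-example
  namespace: default
spec:
  predicates:
  - requiredClusterSelector:
      labelSelector:
        matchExpressions: []   # an empty selector matches all bound clusters
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: binding-example
  namespace: default
placementRef:
  apiGroup: cluster.open-cluster-management.io
  kind: Placement
  name: placement-example
subjects:
- apiGroup: policy.open-cluster-management.io
  kind: Policy
  name: policy-example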
Prerequisite
You must meet the following prerequisites to install the policy framework:
- Ensure the open-cluster-management cluster manager is installed. See Start the control plane for more information.
- Ensure the open-cluster-management klusterlet is installed. See Register a cluster for more information.
- If you are using PlacementRules with your policies, ensure the open-cluster-management application is installed. See Application management for more information. If you are using the default Placement API, you can skip the Application management installation, but you do need to install the PlacementRule CRD with this command:
kubectl apply -f https://raw.githubusercontent.com/open-cluster-management-io/multicloud-operators-subscription/main/deploy/hub-common/apps.open-cluster-management.io_placementrules_crd.yaml
Install the governance-policy-framework hub components
Install via Clusteradm CLI
Ensure clusteradm
CLI is installed and is at least v0.3.0. Download and extract the
clusteradm binary. For more details see the
clusteradm GitHub page.
Deploy the policy framework controllers to the hub cluster:
# The context name of the clusters in your kubeconfig
# If the clusters are created by KinD, then the context name will follow the pattern "kind-<cluster name>".
export CTX_HUB_CLUSTER=<your hub cluster context>          # export CTX_HUB_CLUSTER=kind-hub
export CTX_MANAGED_CLUSTER=<your managed cluster context>  # export CTX_MANAGED_CLUSTER=kind-cluster1
# Set the deployment namespace
export HUB_NAMESPACE="open-cluster-management"
# Deploy the policy framework hub controllers
clusteradm install hub-addon --names governance-policy-framework --context ${CTX_HUB_CLUSTER}
Ensure the pods are running on the hub with the following command:
$ kubectl get pods -n ${HUB_NAMESPACE}
NAME                                                  READY   STATUS    RESTARTS   AGE
governance-policy-addon-controller-bc78cbcb4-529c2    1/1     Running   0          94s
governance-policy-propagator-8c77f7f5f-kthvh          1/1     Running   0          94s
- See more about the governance-policy-framework components.
Deploy the synchronization components to the managed cluster(s)
Deploy via Clusteradm CLI
To deploy the synchronization components to a self-managed hub cluster:
clusteradm addon enable --names governance-policy-framework --clusters <managed_hub_cluster_name> --annotate addon.open-cluster-management.io/on-multicluster-hub=true --context ${CTX_HUB_CLUSTER}
To deploy the synchronization components to a managed cluster:
clusteradm addon enable --names governance-policy-framework --clusters <cluster_name> --context ${CTX_HUB_CLUSTER}
Verify that the governance-policy-framework-addon controller pod is running on the managed cluster with the following command:
$ kubectl get pods -n open-cluster-management-agent-addon
NAME                                               READY   STATUS    RESTARTS   AGE
governance-policy-framework-addon-57579b7c-652zj   1/1     Running   0          87s
What is next
Install the policy controllers to the managed clusters.
4 - Administration
A few general guides about operating the open-cluster-management control plane and the managed clusters.
4.1 - Monitoring OCM using Prometheus-Operator
In this page, we provide a way to monitor your OCM environment using Prometheus-Operator.
Before you get started
You must have an OCM environment set up. You can follow our recommended quick start guide to set up a playground OCM environment.
Then install the Prometheus-Operator in your hub cluster. You can run the following commands copied from the official doc:
git clone https://github.com/prometheus-operator/kube-prometheus.git
cd kube-prometheus
# Create the namespace and CRDs, and then wait for them to be available before creating the remaining resources
kubectl create -f manifests/setup
# Wait until the "servicemonitors" CRD is created. The message "No resources found" means success in this context.
until kubectl get servicemonitors --all-namespaces ; do date; sleep 1; echo ""; done
kubectl create -f manifests/
Monitoring the control-plane resource usage
You can use kubectl port-forward to open the Prometheus UI in your browser on localhost:9090:
kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090
The following queries monitor the control-plane pods’ CPU usage, memory usage, and the API request counts for critical CRs:
rate(container_cpu_usage_seconds_total{namespace=~"open-cluster-management.*"}[3m])
container_memory_working_set_bytes{namespace=~"open-cluster-management.*"}
rate(apiserver_request_total{resource=~"managedclusters|managedclusteraddons|managedclustersetbindings|managedclustersets|addonplacementscores|placementdecisions|placements|manifestworks|manifestworkreplicasets"}[1m])
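If you prefer to keep these queries as reusable recording rules instead of typing them into the UI, they can be wrapped in a PrometheusRule. A sketch assuming the default kube-prometheus setup in the monitoring namespace (the rule names are illustrative, and the labels follow kube-prometheus’ default rule selector):
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: ocm-control-plane-usage
  namespace: monitoring
  labels:
    prometheus: k8s        # default rule-selector labels used by kube-prometheus
    role: alert-rules
spec:
  groups:
  - name: ocm.rules
    rules:
    # CPU usage of the OCM control-plane pods
    - record: ocm:container_cpu_usage_seconds:rate3m
      expr: rate(container_cpu_usage_seconds_total{namespace=~"open-cluster-management.*"}[3m])
    # Working-set memory of the OCM control-plane pods
    - record: ocm:container_memory_working_set_bytes
      expr: container_memory_working_set_bytes{namespace=~"open-cluster-management.*"}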
Visualized with Grafana
We provide an initial Grafana dashboard for you to visualize the metrics. You can also customize your own dashboard.
First, use the following command to proxy grafana service:
kubectl --namespace monitoring port-forward svc/grafana 3000
Next, open the grafana UI in your browser on localhost:3000.
Click “Import Dashboard”, then run the following command to copy a sample dashboard and paste it into Grafana:
curl https://raw.githubusercontent.com/open-cluster-management-io/open-cluster-management-io.github.io/main/content/en/getting-started/administration/assets/grafana-sample.json | pbcopy
Then, you will get a sample grafana dashboard that you can fine-tune further:
4.2 - Upgrading your OCM environment
This page provides the suggested steps to upgrade your OCM environment including both the hub cluster and the managed clusters. Overall the major steps you should follow are:
- Read the release notes to confirm the latest OCM release version. (Note that some add-ons’ version might be different from OCM’s overall release version.)
- Upgrade your command-line tool clusteradm
Before you begin
You must have an existing OCM environment with the registration-operator running in your clusters. The registration-operator should already be installed if you previously followed our recommended quick start guide to set up your OCM. The operator is responsible for helping you upgrade the other components with ease.
Upgrade command-line tool
In order to retrieve the latest version of OCM’s command-line tool clusteradm
,
run the following one-liner command:
$ curl -L https://raw.githubusercontent.com/open-cluster-management-io/clusteradm/main/install.sh | bash
Then you should see output similar to the following:
Getting the latest clusteradm CLI...
Your system is darwin_amd64
clusteradm CLI is detected:
Reinstalling clusteradm CLI - /usr/local/bin/clusteradm...
Installing v0.1.0 OCM clusteradm CLI...
Downloading https://github.com/open-cluster-management-io/clusteradm/releases/download/v0.1.0/clusteradm_darwin_amd64.tar.gz ...
clusteradm installed into /usr/local/bin successfully.
To get started with clusteradm, please visit https://open-cluster-management.io/getting-started/
Also, you can confirm the installed CLI version by running:
$ clusteradm version
client version :v0.1.0
server release version : ...
Upgrade OCM Components via Command-line tool
Hub Cluster
For example, to upgrade OCM components in the hub cluster, run the following command:
$ clusteradm upgrade clustermanager --bundle-version=0.7.0
Then clusteradm
will make sure everything in the hub cluster is upgraded to
the expected version. To check the latest status after the upgrade, continue to
run the following command:
$ clusteradm get hub-info
Managed Clusters
To upgrade the OCM components in the managed clusters, switch the client context,
e.g. by overriding the KUBECONFIG
environment variable, then simply run the following
command:
$ clusteradm upgrade klusterlet --bundle-version=0.7.0
To check the status after the upgrade, continue running this command against the managed cluster:
$ clusteradm get klusterlet-info
Upgrade OCM Components via Manual Edit
Hub Cluster
Upgrading the registration-operator
Navigate into the namespace where you installed registration-operator (named “open-cluster-management” by default) and edit the image version of its deployment resource:
$ kubectl -n open-cluster-management edit deployment cluster-manager
Then update the image tag version to your target release version, which is exactly the OCM’s overall release version.
--- image: quay.io/open-cluster-management/registration-operator:<old release>
+++ image: quay.io/open-cluster-management/registration-operator:<new release>
Upgrading the core components
After the upgrade of the registration-operator is done, it’s time to upgrade
the working modules of OCM. Edit the clustermanager custom resource
to instruct the registration-operator to perform the automated upgrade:
$ kubectl edit clustermanager cluster-manager
In the content of the clustermanager resource, you should see a few
images listed in its spec:
apiVersion: operator.open-cluster-management.io/v1
kind: ClusterManager
metadata: ...
spec:
registrationImagePullSpec: quay.io/open-cluster-management/registration:<target release>
workImagePullSpec: quay.io/open-cluster-management/work:<target release>
# NOTE: Placement release versioning differs from the OCM root version, please refer to the release note.
placementImagePullSpec: quay.io/open-cluster-management/placement:<target release>
Replacing the old release version with the latest and committing the changes will
trigger the background upgrade process. Note that the status of the upgrade
can be actively tracked via the status of the clustermanager resource, so if anything goes
wrong during the upgrade it should also be reflected in that status.
Managed Clusters
Upgrading the registration-operator
Similar to the process of upgrading the hub’s registration-operator, the only difference when upgrading the managed cluster is the name of the deployment. Note that before running the following command, you are expected to switch the context to access the managed cluster, not the hub.
$ kubectl -n open-cluster-management edit deployment klusterlet
Then, as before, update the image tag to your target release version; committing the changes will upgrade the registration-operator.
Upgrading the agent components
After the registration-operator is upgraded, move on and edit the corresponding
klusterlet
custom resource to trigger the upgrading process in your managed
cluster:
$ kubectl edit klusterlet klusterlet
In the spec of the klusterlet resource, what needs to be updated is likewise its image
list:
apiVersion: operator.open-cluster-management.io/v1
kind: Klusterlet
metadata: ...
spec:
...
registrationImagePullSpec: quay.io/open-cluster-management/registration:<target release>
workImagePullSpec: quay.io/open-cluster-management/work:<target release>
After committing the updates, actively check the status of the klusterlet
to confirm whether everything is correctly upgraded, and repeat the above steps
for each of the managed clusters to perform a cluster-wise progressive upgrade.
Confirm the upgrade
Getting the overall status of the managed clusters will help you detect availability issues in case any of the managed clusters are running into failure:
$ kubectl get managedclusters
The upgrade is all set if all the steps above succeed.