Enhance the open-cluster-management core control plane with optional add-ons and integrations.
Add-ons and Integrations
- 1: Application lifecycle management
- 2: Cluster proxy
- 3: Managed service account
- 4: Policy controllers
- 5: Policy framework
1 - Application lifecycle management
After the cluster manager is installed, you can install the application management components to the hub cluster.
Architecture
For more details, visit the multicloud-operators-subscription GitHub page.
Prerequisite
You must meet the following prerequisites to install the application lifecycle management add-on:
- Ensure the open-cluster-management cluster manager is installed. See Start the control plane for more information.
- Ensure the open-cluster-management klusterlet is installed. See Register a cluster for more information.
Install via Clusteradm CLI tool
Ensure the clusteradm CLI tool is installed. Download and extract the clusteradm binary. For more details see the clusteradm GitHub page.
$ clusteradm
Usage:
clusteradm [command]
...
Deploy the subscription operators to the hub cluster.
$ kubectl config use-context ${CTX_HUB_CLUSTER}
$ clusteradm install hub-addon --names application-manager
Installing built-in application-manager add-on to the Hub cluster...
$ kubectl -n open-cluster-management get deploy multicluster-operators-subscription --context ${CTX_HUB_CLUSTER}
NAME READY UP-TO-DATE AVAILABLE AGE
multicluster-operators-subscription 1/1 1 1 25s
Create the open-cluster-management-agent-addon namespace on the managed cluster.
$ kubectl create ns open-cluster-management-agent-addon --context ${CTX_MANAGED_CLUSTER}
namespace/open-cluster-management-agent-addon created
Deploy the subscription add-on in the corresponding managed cluster namespace on the hub cluster.
$ kubectl config use-context ${CTX_HUB_CLUSTER}
$ clusteradm addon enable --names application-manager --clusters ${MANAGED_CLUSTER_NAME}
Deploying application-manager add-on to managed cluster: <managed_cluster_name>.
$ kubectl -n ${MANAGED_CLUSTER_NAME} get managedclusteraddon # kubectl -n cluster1 get managedclusteraddon
NAME AVAILABLE DEGRADED PROGRESSING
application-manager True
Check the subscription add-on deployment on the managed cluster.
$ kubectl -n open-cluster-management-agent-addon get deploy --context ${CTX_MANAGED_CLUSTER}
NAME READY UP-TO-DATE AVAILABLE AGE
application-manager 1/1 1 1 103s
Install from source
Clone the multicloud-operators-subscription repository.
git clone https://github.com/open-cluster-management-io/multicloud-operators-subscription
cd multicloud-operators-subscription
Deploy the subscription operators to the hub cluster.
$ kubectl config use-context ${CTX_HUB_CLUSTER}
$ make deploy-hub
$ kubectl -n open-cluster-management get deploy multicluster-operators-subscription --context ${CTX_HUB_CLUSTER}
NAME READY UP-TO-DATE AVAILABLE AGE
multicluster-operators-subscription 1/1 1 1 25s
Create the open-cluster-management-agent-addon namespace on the managed cluster. This step is optional if clusteradm is used, since clusteradm creates the namespace during the join action.
$ kubectl create ns open-cluster-management-agent-addon --context ${CTX_MANAGED_CLUSTER}
namespace/open-cluster-management-agent-addon created
Deploy the subscription add-on in the corresponding managed cluster namespace on the hub cluster.
$ kubectl config use-context ${CTX_HUB_CLUSTER}
$ make deploy-addon
$ kubectl -n ${MANAGED_CLUSTER_NAME} get managedclusteraddon # kubectl -n cluster1 get managedclusteraddon
NAME AVAILABLE DEGRADED PROGRESSING
application-manager True
Check the subscription add-on deployment on the managed cluster.
$ kubectl -n open-cluster-management-agent-addon get deploy --context ${CTX_MANAGED_CLUSTER}
NAME READY UP-TO-DATE AVAILABLE AGE
application-manager 1/1 1 1 103s
What is next
After a successful deployment, test the subscription operator with a Helm subscription. Run the following command, where the examples/helmrepo-hub-channel directory is located here:
kubectl apply -f examples/helmrepo-hub-channel --context ${CTX_HUB_CLUSTER}
After a while, you should see the subscription is propagated to the managed cluster and the Helm app is installed. By default, when a subscribed application is deployed to the target clusters, the application is installed in the corresponding subscription namespace. To confirm, run the following command:
$ kubectl get subscriptions.apps --context ${CTX_MANAGED_CLUSTER}
NAME STATUS AGE LOCAL PLACEMENT TIME WINDOW
nginx-sub Subscribed 107m true
$ kubectl get pod --context ${CTX_MANAGED_CLUSTER}
NAME READY STATUS RESTARTS AGE
nginx-ingress-47f79-controller-6f495bb5f9-lpv7z 1/1 Running 0 108m
nginx-ingress-47f79-default-backend-7559599b64-rhwgm 1/1 Running 0 108m
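If you would rather author the manifests yourself, the example boils down to a Channel that points at a Helm repository and a Subscription that references it. The following is a minimal sketch: the resource names, repository URL, chart name, and placement reference are illustrative placeholders, and the manifests under examples/helmrepo-hub-channel remain the authoritative reference.
apiVersion: apps.open-cluster-management.io/v1
kind: Channel
metadata:
  name: my-helm-channel              # hypothetical channel name
  namespace: my-channel-ns
spec:
  type: HelmRepo
  pathname: https://charts.example.com   # placeholder Helm repository URL
---
apiVersion: apps.open-cluster-management.io/v1
kind: Subscription
metadata:
  name: my-helm-sub                  # hypothetical subscription name
  namespace: default
spec:
  channel: my-channel-ns/my-helm-channel
  name: nginx-ingress                # chart name in the repository (placeholder)
  placement:
    placementRef:                    # reference an existing PlacementRule that
      kind: PlacementRule            # selects the target managed clusters
      name: my-placement-rule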
Try this out
Let the VS Code extension help you out!
Create a Bootstrap Project specifically tailored to your channel type, with all the Custom Resource (CR) templates you will need already auto-generated to get you started!
2 - Cluster proxy
Cluster proxy is an OCM addon providing L4 network connectivity from the hub cluster to the managed clusters, without any additional requirements on the managed cluster’s network infrastructure, by leveraging the official Kubernetes SIG sub-project apiserver-network-proxy.
Background
The original architecture of OCM allows a cluster from anywhere to be registered and managed by OCM’s control plane (i.e. the hub cluster) as long as a klusterlet agent can reach the hub cluster’s endpoint. So the minimal requirement for the managed cluster’s network infrastructure in OCM is “klusterlet -> hub” connectivity. However, there are still cases where components in the hub cluster need to proactively dial or request services in the managed clusters, which requires “hub -> klusterlet” connectivity in the other direction. These cases become even more complex when the managed clusters are not in the same network.
Cluster proxy aims to seamlessly deliver outbound L4 requests to the services in the managed cluster’s network without any assumptions about the infrastructure, as long as the clusters are successfully registered. Basically, the connectivity provided by cluster proxy works over secured reverse proxy tunnels established by the apiserver-network-proxy.
About apiserver-network-proxy
Apiserver-network-proxy is the underlying technology of a Kubernetes feature called konnectivity egress-selector, which is mainly for setting up a TCP-level proxy for the kube-apiserver to access the node/cluster network. Here are a few terms we need to clarify before we elaborate on how the cluster proxy resolves multi-cluster control plane network connectivity for us:
- Proxy Tunnel: A long-lived gRPC connection that multiplexes and transmits TCP-level traffic from the proxy servers to the proxy agents. Note that there will be only one tunnel instance between each pair of server and agent.
- Proxy Server: An mTLS gRPC server opened for establishing tunnels, which is the traffic ingress of the proxy tunnel.
- Proxy Agent: An mTLS gRPC agent that maintains the tunnel between itself and the server, and is also the egress of the proxy tunnel.
- Konnectivity Client: The SDK library for talking through the tunnel. Applicable to any Golang client whose Dialer is overridable. Note that for non-Golang clients, the proxy server also supports HTTP-Connect based proxying as an alternative.
Architecture
Cluster proxy runs inside OCM’s hub cluster as an addon manager which is developed based on the Addon-Framework. The addon manager of cluster proxy will be responsible for:
- Managing the installation of proxy servers in the hub cluster.
- Managing the installation of proxy agents in the managed cluster.
- Collecting healthiness and other stats consistently in the hub cluster.
The following picture shows the overall architecture of cluster proxy:
Note that the green lines in the picture above are the active proxy tunnels between the proxy servers and agents; an HA setup is natively supported by apiserver-network-proxy for both the servers and the agents. The orange dashed line started by the konnectivity client is the path the traffic follows from the hub cluster to arbitrary managed clusters. Meanwhile, the core components, including registration and work, help us manage the lifecycle of all the components distributed across the managed clusters, so the hub admin won’t need to directly operate the managed clusters to install or configure the proxy agents anymore.
Prerequisite
You must meet the following prerequisites to install the cluster-proxy:
- Ensure your open-cluster-management release is greater than v0.5.0.
- Ensure kubectl is installed.
- Ensure helm is installed.
Installation
To install the cluster proxy addon to the OCM control plane, run:
$ helm repo add ocm https://open-cluster-management.io/helm-charts
$ helm repo update
$ helm search repo ocm
NAME CHART VERSION APP VERSION DESCRIPTION
ocm/cluster-proxy v0.1.1 1.0.0 A Helm chart for Cluster-Proxy
...
Then run the following helm command to install the cluster-proxy addon:
$ helm install -n open-cluster-management-addon --create-namespace \
cluster-proxy ocm/cluster-proxy
$ kubectl -n open-cluster-management-addon get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
cluster-proxy 3/3 3 3 24h
cluster-proxy-addon-manager 1/1 1 1 24h
...
Then the addon manager of cluster-proxy will be created in the hub cluster in the form of a deployment named cluster-proxy-addon-manager. As shown above, the proxy servers will also be created as a deployment resource called cluster-proxy.
By default, the addon manager automatically discovers the addition or removal of managed clusters and installs the proxy agents into them on the fly. To check the healthiness status of the proxy agents, we can run:
$ kubectl get managedclusteraddon -A
NAMESPACE NAME AVAILABLE DEGRADED PROGRESSING
<cluster#1> cluster-proxy True
<cluster#2> cluster-proxy True
The proxy agent distributed in the managed cluster will be periodically renewing the lease lock of the addon instance.
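To spot-check this from the managed cluster side, you can list the Lease resources in the namespace where the proxy agent is installed. The namespace below is an assumption based on a default installation and may differ in your deployment:
# Assumes the proxy agent runs in the "open-cluster-management-cluster-proxy"
# namespace; adjust the namespace if your deployment uses a different one.
kubectl -n open-cluster-management-cluster-proxy get lease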
Usage
Command-line tools
Use clusteradm to check the status of the cluster-proxy addon:
$ clusteradm proxy health
CLUSTER NAME INSTALLED AVAILABLE PROBED HEALTH LATENCY
<cluster#1> True True True 67.595144ms
<cluster#2> True True True 85.418368ms
Example code
An example client in the cluster proxy repo shows how to dynamically talk to the kube-apiserver of a managed cluster from the hub cluster by simply prescribing the name of the target cluster. Here’s a TL;DR code snippet:
// 1. instantiate a dialing tunnel instance.
// NOTE: recommended to be a singleton in your golang program.
tunnel, err := konnectivity.CreateSingleUseGrpcTunnel(
    context.TODO(),
    <your proxy server endpoint>,
    grpc.WithTransportCredentials(grpccredentials.NewTLS(<your proxy server TLS config>)),
)
if err != nil {
    panic(err)
}
...
// 2. Overriding the Dialer to tunnel. Dialer is a common abstraction
// in Golang SDK.
cfg.Dial = tunnel.DialContext
Another example is cluster-gateway, which is an aggregated apiserver optionally working over cluster-proxy for routing traffic to the managed clusters dynamically over HTTPS.
Note that by default the client credential for the konnectivity client will be persisted as secret resources in the namespace where the addon-manager is running. To mount the secret into systems in other namespaces, users are expected to copy the secret manually.
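As a rough sketch of that manual copy (the secret name and target namespace below are placeholders to replace with the actual values from your environment):
# Copy the konnectivity client credential secret into another namespace.
# <client secret name> is a placeholder; list the secrets in the addon-manager's
# namespace (open-cluster-management-addon in this guide) to find the real name.
kubectl -n open-cluster-management-addon get secret <client secret name> -o json \
  | jq 'del(.metadata.namespace, .metadata.uid, .metadata.resourceVersion, .metadata.creationTimestamp, .metadata.ownerReferences)' \
  | kubectl apply -n <target namespace> -f -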
More insights
Troubleshooting
The installation of the proxy servers and agents is prescribed by the custom resource called “managedproxyconfiguration”. We can check it out with the following command:
$ kubectl get managedproxyconfiguration cluster-proxy -o yaml
apiVersion: proxy.open-cluster-management.io/v1alpha1
kind: ManagedProxyConfiguration
metadata: ...
spec:
  proxyAgent:
    image: <expected image of the proxy agents>
    replicas: <expected replicas of proxy agents>
  proxyServer:
    entrypoint:
      loadBalancerService:
        name: proxy-agent-entrypoint
      type: LoadBalancerService # Or "Hostname" to set a fixed address
                                # for establishing proxy tunnels.
    image: <expected image of the proxy servers>
    inClusterServiceName: proxy-entrypoint
    namespace: <target namespace to install proxy server>
    replicas: <expected replicas of proxy servers>
  authentication: # Customize authentication between proxy server/agent
status:
  conditions: ...
Related materials
See the original design proposal for reference.
3 - Managed service account
Managed Service Account is an OCM addon that enables a hub cluster admin to manage service accounts across multiple clusters with ease. By controlling the creation and removal of the service account, the addon agent projects and rotates the corresponding token back to the hub cluster, which is very useful for Kube API clients on the hub cluster that make requests against the managed clusters.
Background
Normally there are two major approaches for a Kube API client to authenticate and access a Kubernetes cluster:
- Valid X.509 certificate-key pair
- Service account bearer token
The service account token will be automatically persisted as a secret resource inside the hosting Kubernetes cluster upon creation, which is commonly used by “in-cluster” clients. However, in terms of OCM, the hub cluster is an entirely external system to the managed clusters, so we need a local agent in each managed cluster to reflect the tokens consistently to the hub cluster so that the Kube API client from the hub cluster can “push” requests directly against the managed cluster. By delegating the multi-cluster service account management to this addon, we can:
- Project the service account token from the managed clusters to the hub cluster with custom API audience.
- Rotate the service account tokens dynamically.
- Homogenize the client identities so that we can easily write a static RBAC policy that applies to multiple managed clusters.
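As an illustration of that last point, the sketch below grants read-only access to the projected identity on a managed cluster. It assumes the default behavior shown later on this page, where the addon creates the ServiceAccount with the same name as the ManagedServiceAccount (the hypothetical my-sample) in the open-cluster-management-managed-serviceaccount namespace:
# Because every managed cluster ends up with the same ServiceAccount name and
# namespace, this one static ClusterRoleBinding can be applied to all of them.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: my-sample-viewer             # hypothetical binding name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view                         # built-in read-only ClusterRole
subjects:
  - kind: ServiceAccount
    name: my-sample                  # matches the ManagedServiceAccount name
    namespace: open-cluster-management-managed-serviceaccount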
Prerequisite
You must meet the following prerequisites to install the managed service account:
- Ensure your open-cluster-management release is greater than v0.5.0.
- Ensure kubectl is installed.
- Ensure helm is installed.
Installation
To install the managed service account addon to the OCM control plane, run:
$ helm repo add ocm https://open-cluster-management.io/helm-charts
$ helm repo update
$ helm search repo ocm
NAME CHART VERSION APP VERSION DESCRIPTION
ocm/managed-serviceaccount <...> 1.0.0 A Helm chart for Managed ServiceAccount Addon
...
Then run the following helm command to continue the installation:
$ helm install -n open-cluster-management-addon --create-namespace \
managed-serviceaccount ocm/managed-serviceaccount
$ kubectl -n open-cluster-management-addon get pod
NAME READY STATUS RESTARTS AGE
managed-serviceaccount-addon-manager-5m9c95b7d8-xsb94 1/1 Running 1 4d4h
...
By default, the addon manager automatically discovers the addition or removal of managed clusters and installs the managed serviceaccount agents into them on the fly. To check the healthiness status of the managed serviceaccount agents, we can run:
$ kubectl get managedclusteraddon -A
NAMESPACE NAME AVAILABLE DEGRADED PROGRESSING
<cluster name> managed-serviceaccount True
Usage
To exercise the new ManagedServiceAccount API introduced by this addon, we can start by applying the following sample resource:
$ export CLUSTER_NAME=<cluster name>
$ kubectl create -f - <<EOF
apiVersion: authentication.open-cluster-management.io/v1alpha1
kind: ManagedServiceAccount
metadata:
  name: my-sample
  namespace: ${CLUSTER_NAME}
spec:
  rotation: {}
EOF
Then the addon agent in each managed cluster is responsible for executing and refreshing the status of the ManagedServiceAccount, e.g.:
$ kubectl describe ManagedServiceAccount -n cluster1
...
status:
  conditions:
  - lastTransitionTime: "2021-12-09T09:08:15Z"
    message: ""
    reason: TokenReported
    status: "True"
    type: TokenReported
  - lastTransitionTime: "2021-12-09T09:08:15Z"
    message: ""
    reason: SecretCreated
    status: "True"
    type: SecretCreated
  expirationTimestamp: "2022-12-04T09:08:15Z"
  tokenSecretRef:
    lastRefreshTimestamp: "2021-12-09T09:08:15Z"
    name: my-sample
The service account will be created in the managed cluster (assuming the managed cluster name is cluster1):
$ kubectl get sa my-sample -n open-cluster-management-managed-serviceaccount --context kind-cluster1
NAME SECRETS AGE
my-sample 1 9m57s
The corresponding secret will also be created in the hub cluster, which is visible via:
$ kubectl -n <your cluster> get secret my-sample
NAME TYPE DATA AGE
my-sample Opaque 2 2m23s
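To sanity-check the projected credential, you can pull the token out of the hub-side secret and call the managed cluster’s API directly. This is a sketch that assumes the secret carries token and ca.crt keys and that you know the managed cluster’s API endpoint:
# Extract the projected credentials from the hub-side secret
# (CLUSTER_NAME was exported above; the "token" and "ca.crt" keys are assumptions).
kubectl -n ${CLUSTER_NAME} get secret my-sample -o jsonpath='{.data.token}' | base64 -d > sa.token
kubectl -n ${CLUSTER_NAME} get secret my-sample -o jsonpath='{.data.ca\.crt}' | base64 -d > sa-ca.crt

# <managed cluster API endpoint> is a placeholder, e.g. https://<host>:6443.
# The call succeeds only if RBAC on the managed cluster grants this identity access.
curl --cacert sa-ca.crt -H "Authorization: Bearer $(cat sa.token)" \
  <managed cluster API endpoint>/api/v1/namespaces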
Related materials
Repo: https://github.com/open-cluster-management-io/managed-serviceaccount
See the design proposal at: https://github.com/open-cluster-management-io/enhancements/tree/main/enhancements/sig-architecture/19-projected-serviceaccount-token
4 - Policy controllers
The Policy API on the hub delivers the policies defined in spec.policy-templates to the managed clusters via the policy framework controllers. Once on the managed cluster, these Policy Templates are acted upon by the associated controller on the managed cluster. The policy framework supports delivering the Policy Template kinds listed in this section.
Configuration policy
The ConfigurationPolicy
is provided by OCM and defines Kubernetes manifests to compare with objects that currently
exist on the cluster. The action that the ConfigurationPolicy
will take is determined by its complianceType
.
Compliance types include musthave, mustnothave, and mustonlyhave. musthave means the object should have the listed keys and values as a subset of the larger object. mustnothave means an object matching the listed keys and values should not exist. mustonlyhave ensures objects only exist with the keys and values exactly as defined.
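To make the shape of these resources concrete, the following is a minimal sketch of a Policy wrapping a ConfigurationPolicy that uses musthave. The names and the required namespace object are illustrative; the sample policy referenced in the Configuration Policy section is the complete, tested version.
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-require-ns            # hypothetical policy name
  namespace: default
spec:
  remediationAction: inform          # switch to "enforce" to have the controller create the object
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: require-ns
        spec:
          remediationAction: inform
          severity: low
          object-templates:
            - complianceType: musthave     # the object below must exist (as a subset match)
              objectDefinition:
                apiVersion: v1
                kind: Namespace
                metadata:
                  name: example-namespace  # illustrative namespace to require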
Open Policy Agent Gatekeeper
Gatekeeper is a validating webhook with auditing capabilities that can enforce custom resource definition-based
policies that are run with the Open Policy Agent (OPA). Gatekeeper ConstraintTemplates
and constraints can be
provided in an OCM Policy
to sync to managed clusters that have Gatekeeper installed on them.
4.1 - Configuration Policy
The ConfigurationPolicy defines Kubernetes manifests to compare with objects that currently exist on the cluster. The Configuration policy controller is provided by Open Cluster Management and runs on managed clusters.
Prerequisites
You must meet the following prerequisites to install the configuration policy controller:
- Ensure Golang is installed, if you are planning to install from the source.
- Ensure the open-cluster-management policy framework is installed. See Policy Framework for more information.
Installing the configuration policy controller
Deploy via Clusteradm CLI
Ensure the clusteradm CLI is installed and is newer than v0.3.0. Download and extract the clusteradm binary. For more details see the clusteradm GitHub page.
Deploy the configuration policy controller to the managed clusters (this command is the same for a self-managed hub):
# Deploy the configuration policy controller
clusteradm addon enable --names config-policy-controller --clusters <cluster_name> --context ${CTX_HUB_CLUSTER}
Ensure the pod is running on the managed cluster with the following command:
$ kubectl get pods -n open-cluster-management-agent-addon
NAME                                        READY   STATUS    RESTARTS   AGE
config-policy-controller-7f8fb64d8c-pmfx4   1/1     Running   0          44s
Deploy from source
Deploy the config-policy-controller to the managed cluster with the following commands:
# The context name of the clusters in your kubeconfig
# If the clusters are created by KinD, then the context name will follow the pattern "kind-<cluster name>".
export CTX_HUB_CLUSTER=<your hub cluster context>          # export CTX_HUB_CLUSTER=kind-hub
export CTX_MANAGED_CLUSTER=<your managed cluster context>  # export CTX_MANAGED_CLUSTER=kind-cluster1

# Configure kubectl to point to the managed cluster
kubectl config use-context ${CTX_MANAGED_CLUSTER}

# Create the namespace for the controller
export MANAGED_NAMESPACE="open-cluster-management-agent-addon"
kubectl create ns ${MANAGED_NAMESPACE}

# Apply the CRD
export COMPONENT="config-policy-controller"
export GIT_PATH="https://raw.githubusercontent.com/open-cluster-management-io/${COMPONENT}/v0.12.0/deploy"
kubectl apply -f ${GIT_PATH}/crds/policy.open-cluster-management.io_configurationpolicies.yaml
kubectl apply -f ${GIT_PATH}/crds/policy.open-cluster-management.io_operatorpolicies.yaml

# Set the managed cluster name
export MANAGED_CLUSTER_NAME=<your managed cluster name>  # export MANAGED_CLUSTER_NAME=cluster1

# Deploy the controller
kubectl apply -f ${GIT_PATH}/operator.yaml -n ${MANAGED_NAMESPACE}
kubectl set env deployment/${COMPONENT} -n ${MANAGED_NAMESPACE} --containers=${COMPONENT} WATCH_NAMESPACE=${MANAGED_CLUSTER_NAME}
- See config-policy-controller for more information.
Ensure the pod is running on the managed cluster with the following command:
$ kubectl get pods -n ${MANAGED_NAMESPACE}
NAME                                        READY   STATUS    RESTARTS   AGE
config-policy-controller-7f8fb64d8c-pmfx4   1/1     Running   0          44s
Sample configuration policy
After a successful deployment, test the policy framework and configuration policy controller with a sample policy. You can use a policy that includes a Placement mapping, or, if you installed Application management’s PlacementRule support, you can use either placement implementation. Perform the steps in the Placement API or the Placement Rule API section based on which placement API you want to use.
For more information on how to use a ConfigurationPolicy, read the Policy API concept section.
Placement API
Run the following command to create a policy on the hub that uses Placement:
# Configure kubectl to point to the hub cluster
kubectl config use-context ${CTX_HUB_CLUSTER}

# Apply the example policy and placement
kubectl apply -n default -f https://raw.githubusercontent.com/open-cluster-management-io/policy-collection/main/community/CM-Configuration-Management/policy-pod-placement.yaml
Update the Placement to distribute the policy to the managed cluster with the following command (this clusterSelector will deploy the policy to all managed clusters):
kubectl patch -n default placement.cluster.open-cluster-management.io/placement-policy-pod --type=merge -p "{\"spec\":{\"predicates\":[{\"requiredClusterSelector\":{\"labelSelector\":{\"matchExpressions\":[]}}}]}}"
Make sure the default namespace has a ManagedClusterSetBinding for a ManagedClusterSet with at least one managed cluster resource in the ManagedClusterSet. See Bind ManagedClusterSet to a namespace for more information on this.
To confirm that the managed cluster is selected by the Placement, run the following command:
$ kubectl get -n default placementdecision.cluster.open-cluster-management.io/placement-policy-pod-decision-1 -o yaml
...
status:
  decisions:
  - clusterName: <managed cluster name>
    reason: ""
...
Placement Rule API
NOTE: Skip this section if you applied the Placement API policy manifests.
Run the following command to create a policy on the hub that uses PlacementRule:
# Configure kubectl to point to the hub cluster
kubectl config use-context ${CTX_HUB_CLUSTER}

# Apply the example policy and placement rule
kubectl apply -n default -f https://raw.githubusercontent.com/open-cluster-management-io/policy-collection/main/stable/CM-Configuration-Management/policy-pod.yaml
Update the PlacementRule to distribute the policy to the managed cluster with the following command (this clusterSelector will deploy the policy to all managed clusters):
$ kubectl patch -n default placementrule.apps.open-cluster-management.io/placement-policy-pod --type=merge -p "{\"spec\":{\"clusterSelector\":{\"matchExpressions\":[]}}}"
placementrule.apps.open-cluster-management.io/placement-policy-pod patched
To confirm that the managed cluster is selected by the PlacementRule, run the following command:
$ kubectl get -n default placementrule.apps.open-cluster-management.io/placement-policy-pod -o yaml
...
status:
  decisions:
  - clusterName: ${MANAGED_CLUSTER_NAME}
    clusterNamespace: ${MANAGED_CLUSTER_NAME}
...
Final steps to apply the policy
Perform the following steps to continue working with the policy to test the policy framework now that a placement method has been selected between Placement and PlacementRule.
Enforce the policy to make the configuration policy automatically correct any misconfigurations on the managed cluster:
$ kubectl patch -n default policy.policy.open-cluster-management.io/policy-pod --type=merge -p "{\"spec\":{\"remediationAction\": \"enforce\"}}"
policy.policy.open-cluster-management.io/policy-pod patched
After a few seconds, your policy is propagated to the managed cluster. To confirm, run the following command:
$ kubectl config use-context ${CTX_MANAGED_CLUSTER}
$ kubectl get policy -A
NAMESPACE   NAME                 REMEDIATION ACTION   COMPLIANCE STATE   AGE
cluster1    default.policy-pod   enforce              Compliant          4m32s
The missing pod is created by the policy on the managed cluster. To confirm, run the following command on the managed cluster:
$ kubectl get pod -n default
NAME               READY   STATUS    RESTARTS   AGE
sample-nginx-pod   1/1     Running   0          23s
4.2 - Open Policy Agent Gatekeeper
Gatekeeper is a validating webhook with auditing capabilities that can enforce custom resource definition-based policies that are run with the Open Policy Agent (OPA). Gatekeeper constraints can be used to evaluate Kubernetes resource compliance. You can leverage OPA as the policy engine, and use Rego as the policy language.
Installing Gatekeeper
See the Gatekeeper documentation to install the desired version of Gatekeeper to the managed cluster.
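For example, installing Gatekeeper with its official Helm chart typically looks like the following sketch; check the Gatekeeper documentation for the chart version that matches your cluster:
# Run against the managed cluster where Gatekeeper should be installed.
helm repo add gatekeeper https://open-policy-agent.github.io/gatekeeper/charts
helm repo update
helm install gatekeeper gatekeeper/gatekeeper \
  --namespace gatekeeper-system --create-namespace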
Sample Gatekeeper policy
Gatekeeper policies are written using constraint templates and constraints. View the following YAML examples that use Gatekeeper constraints in an OCM Policy:
ConstraintTemplates and constraints: Use the Gatekeeper integration feature by using OCM policies for multicluster distribution of Gatekeeper constraints and Gatekeeper audit results aggregation on the hub cluster. The following example defines a Gatekeeper ConstraintTemplate and constraint (K8sRequiredLabels) to ensure the “gatekeeper” label is set on all namespaces:
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: require-gatekeeper-labels-on-ns
spec:
  remediationAction: inform # (1)
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: templates.gatekeeper.sh/v1beta1
        kind: ConstraintTemplate
        metadata:
          name: k8srequiredlabels
        spec:
          crd:
            spec:
              names:
                kind: K8sRequiredLabels
              validation:
                openAPIV3Schema:
                  properties:
                    labels:
                      type: array
                      items: string
          targets:
            - target: admission.k8s.gatekeeper.sh
              rego: |
                package k8srequiredlabels
                violation[{"msg": msg, "details": {"missing_labels": missing}}] {
                  provided := {label | input.review.object.metadata.labels[label]}
                  required := {label | label := input.parameters.labels[_]}
                  missing := required - provided
                  count(missing) > 0
                  msg := sprintf("you must provide labels: %v", [missing])
                }
    - objectDefinition:
        apiVersion: constraints.gatekeeper.sh/v1beta1
        kind: K8sRequiredLabels
        metadata:
          name: ns-must-have-gk
        spec:
          enforcementAction: dryrun
          match:
            kinds:
              - apiGroups: [""]
                kinds: ["Namespace"]
          parameters:
            labels: ["gatekeeper"]
- (1) Since the remediationAction is set to “inform”, the enforcementAction field of the Gatekeeper constraint is overridden to “warn”. This means that Gatekeeper detects and warns you about creating or updating a namespace that is missing the “gatekeeper” label. If the policy remediationAction is set to “enforce”, the Gatekeeper constraint enforcementAction field is overridden to “deny”. In this context, this configuration prevents any user from creating or updating a namespace that is missing the gatekeeper label.
With the previous policy, you might receive the following policy status message:
warn - you must provide labels: {“gatekeeper”} (on Namespace default); warn - you must provide labels: {“gatekeeper”} (on Namespace gatekeeper-system).
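To move from warnings to denials as described in the annotation above, patch the parent policy’s remediationAction to enforce; this sketch assumes the policy was created in the default namespace:
kubectl patch -n default policy.policy.open-cluster-management.io/require-gatekeeper-labels-on-ns \
  --type=merge -p '{"spec":{"remediationAction":"enforce"}}'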
Once a policy containing Gatekeeper constraints or ConstraintTemplates is deleted, the constraints and ConstraintTemplates are also deleted from the managed cluster.
Notes:
- The Gatekeeper audit functionality runs every minute by default. Audit results are sent back to the hub cluster to be viewed in the OCM policy status of the managed cluster.
Auditing Gatekeeper events: The following example uses an OCM configuration policy within an OCM policy to check for Kubernetes API requests denied by the Gatekeeper admission webhook:
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-gatekeeper-admission
spec:
  remediationAction: inform
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: policy-gatekeeper-admission
        spec:
          remediationAction: inform # will be overridden by remediationAction in parent policy
          severity: low
          object-templates:
            - complianceType: mustnothave
              objectDefinition:
                apiVersion: v1
                kind: Event
                metadata:
                  namespace: gatekeeper-system # set it to the actual namespace where gatekeeper is running if different
                  annotations:
                    constraint_action: deny
                    constraint_kind: K8sRequiredLabels
                    constraint_name: ns-must-have-gk
                    event_type: violation
5 - Policy framework
The policy framework provides governance capabilities to OCM managed Kubernetes clusters. Policies provide visibility and drive remediation for various security and configuration aspects to help IT administrators meet their requirements.
API Concepts
View the Policy API page for additional details about the Policy API managed by the Policy Framework components.
Architecture
The governance policy framework distributes policies to managed clusters and collects results to send back to the hub cluster.
Note that in OCM versions newer than 0.8.x, the previously separate policy synchronization controllers were consolidated into the policy framework addon.
Prerequisite
You must meet the following prerequisites to install the policy framework:
- Ensure Golang is installed, if you are planning to install from the source.
- Ensure the open-cluster-management cluster manager is installed. See Start the control plane for more information.
- Ensure the open-cluster-management klusterlet is installed. See Register a cluster for more information.
- If you are using PlacementRules with your policies, ensure the open-cluster-management application is installed. See Application management for more information. If you are using the default Placement API, you can skip the Application management installation, but you do need to install the PlacementRule CRD with this command:
kubectl apply -f https://raw.githubusercontent.com/open-cluster-management-io/multicloud-operators-subscription/main/deploy/hub-common/apps.open-cluster-management.io_placementrules_crd.yaml
Install the governance-policy-framework hub components
Install via Clusteradm CLI
Ensure the clusteradm CLI is installed and is at least v0.3.0. Download and extract the clusteradm binary. For more details see the clusteradm GitHub page.
Deploy the policy framework controllers to the hub cluster:
# The context name of the clusters in your kubeconfig
# If the clusters are created by KinD, then the context name will follow the pattern "kind-<cluster name>".
export CTX_HUB_CLUSTER=<your hub cluster context>          # export CTX_HUB_CLUSTER=kind-hub
export CTX_MANAGED_CLUSTER=<your managed cluster context>  # export CTX_MANAGED_CLUSTER=kind-cluster1

# Set the deployment namespace
export HUB_NAMESPACE="open-cluster-management"

# Deploy the policy framework hub controllers
clusteradm install hub-addon --names governance-policy-framework --context ${CTX_HUB_CLUSTER}
Ensure the pods are running on the hub with the following command:
$ kubectl get pods -n ${HUB_NAMESPACE}
NAME                                                 READY   STATUS    RESTARTS   AGE
governance-policy-addon-controller-bc78cbcb4-529c2   1/1     Running   0          94s
governance-policy-propagator-8c77f7f5f-kthvh         1/1     Running   0          94s
- See more about the governance-policy-framework components:
Install from source
Deploy the policy Custom Resource Definitions (CRD) and policy propagator component to the open-cluster-management namespace on the hub cluster with the following commands:
# Configure kubectl to point to the hub cluster
kubectl config use-context ${CTX_HUB_CLUSTER}

# Create the namespace
export HUB_NAMESPACE="open-cluster-management"
kubectl create ns ${HUB_NAMESPACE}

# Set the hub cluster name
export HUB_CLUSTER_NAME="hub"

# Set the hub kubeconfig file
export HUB_KUBECONFIG="hub-kubeconfig"

# Apply the CRDs
export GIT_PATH="https://raw.githubusercontent.com/open-cluster-management-io/governance-policy-propagator/v0.12.0/deploy"
kubectl apply -f ${GIT_PATH}/crds/policy.open-cluster-management.io_policies.yaml
kubectl apply -f ${GIT_PATH}/crds/policy.open-cluster-management.io_placementbindings.yaml
kubectl apply -f ${GIT_PATH}/crds/policy.open-cluster-management.io_policyautomations.yaml
kubectl apply -f ${GIT_PATH}/crds/policy.open-cluster-management.io_policysets.yaml

# Deploy the policy-propagator
kubectl apply -f ${GIT_PATH}/operator.yaml -n ${HUB_NAMESPACE}
The policy propagator manages a webhook that requires a certificate. You can either disable the webhook or deploy cert-manager alongside the webhook resources to ensure the policy propagator runs properly:
Option 1: Enable the webhook
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.12.0/cert-manager.yaml
kubectl apply -f ${GIT_PATH}/webhook.yaml -n ${HUB_NAMESPACE}
Option 2: Disable the webhook with the --enable-webhooks=false argument
kubectl patch deployment governance-policy-propagator --type='json' -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--enable-webhooks=false"}]' -n ${HUB_NAMESPACE}
kubectl patch deployment governance-policy-propagator --type='json' -p='[{"op": "remove", "path": "/spec/template/spec/containers/0/volumeMounts/0"}, {"op": "remove", "path": "/spec/template/spec/volumes/0"}]' -n ${HUB_NAMESPACE}
Ensure the pods are running on the hub with the following command:
$ kubectl get pods -n ${HUB_NAMESPACE}
NAME                                           READY   STATUS    RESTARTS   AGE
governance-policy-propagator-8c77f7f5f-kthvh   1/1     Running   0          94s
- See more about the governance-policy-framework components:
Deploy the synchronization components to the managed cluster(s)
Deploy via Clusteradm CLI
To deploy the synchronization components to a self-managed hub cluster:
clusteradm addon enable --names governance-policy-framework --clusters <managed_hub_cluster_name> --annotate addon.open-cluster-management.io/on-multicluster-hub=true --context ${CTX_HUB_CLUSTER}
To deploy the synchronization components to a managed cluster:
clusteradm addon enable --names governance-policy-framework --clusters <cluster_name> --context ${CTX_HUB_CLUSTER}
Verify that the governance-policy-framework-addon controller pod is running on the managed cluster with the following command:
$ kubectl get pods -n open-cluster-management-agent-addon
NAME                                               READY   STATUS    RESTARTS   AGE
governance-policy-framework-addon-57579b7c-652zj   1/1     Running   0          87s
NOTE: If you are using clusteradm v0.3.x or older, the pod will be called governance-policy-framework and have a container per synchronization component (2 on a self-managed hub, or 3 on a managed cluster).
Deploy from source
Export the hub cluster kubeconfig with the following command:
For a kind cluster:
kind get kubeconfig --name ${HUB_CLUSTER_NAME} --internal > ${HUB_KUBECONFIG}
For non-kind clusters:
kubectl config view --context=${CTX_HUB_CLUSTER} --minify --flatten > ${HUB_KUBECONFIG}
Deploy the policy synchronization component to each managed cluster. Run the following commands:
NOTE: The --disable-spec-sync flag should be set to true in the governance-policy-framework-addon container arguments when deploying the synchronization component to a hub that is managing itself.
# Set whether or not this is being deployed on the Hub
export DEPLOY_ON_HUB=false

# Configure kubectl to point to the managed cluster
kubectl config use-context ${CTX_MANAGED_CLUSTER}

# Create the namespace for the synchronization components
export MANAGED_NAMESPACE="open-cluster-management-agent-addon"
kubectl create ns ${MANAGED_NAMESPACE}

# Create the secret to authenticate with the hub
kubectl -n ${MANAGED_NAMESPACE} create secret generic hub-kubeconfig --from-file=kubeconfig=${HUB_KUBECONFIG}

# Apply the policy CRD
export GIT_PATH="https://raw.githubusercontent.com/open-cluster-management-io"
kubectl apply -f ${GIT_PATH}/governance-policy-propagator/v0.12.0/deploy/crds/policy.open-cluster-management.io_policies.yaml

# Set the managed cluster name and create the namespace
export MANAGED_CLUSTER_NAME=<your managed cluster name>  # export MANAGED_CLUSTER_NAME=cluster1
kubectl create ns ${MANAGED_CLUSTER_NAME}

# Deploy the synchronization component
export COMPONENT="governance-policy-framework-addon"
kubectl apply -f ${GIT_PATH}/${COMPONENT}/v0.12.0/deploy/operator.yaml -n ${MANAGED_NAMESPACE}
kubectl patch deployment governance-policy-framework-addon -n ${MANAGED_NAMESPACE} \
  -p "{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"governance-policy-framework-addon\",\"args\":[\"--hub-cluster-configfile=/var/run/klusterlet/kubeconfig\", \"--cluster-namespace=${MANAGED_CLUSTER_NAME}\", \"--enable-lease=true\", \"--log-level=2\", \"--disable-spec-sync=${DEPLOY_ON_HUB}\"]}]}}}}"
Verify that the pods are running on the managed cluster with the following command:
$ kubectl get pods -n ${MANAGED_NAMESPACE}
NAME                                                 READY   STATUS    RESTARTS   AGE
governance-policy-framework-addon-6474b6d898-tmkw6   1/1     Running   0          2m14s
What is next
Install the policy controllers to the managed clusters.