Policy controllers

The Policy API on the hub delivers the policies defined in spec.policy-templates to the managed clusters via the policy framework controllers. Once delivered, these Policy Templates are acted upon by the associated controller on the managed cluster. The policy framework supports delivering the Policy Template kinds listed below.
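
For orientation, here is a minimal Policy sketch (all names and the namespace are placeholders) showing how a template is wrapped in spec.policy-templates on the hub:

    apiVersion: policy.open-cluster-management.io/v1
    kind: Policy
    metadata:
      name: policy-example          # placeholder name
      namespace: default            # hub namespace that the policy is created in
    spec:
      remediationAction: inform
      disabled: false
      policy-templates:
        - objectDefinition:
            # Any supported Policy Template kind can be embedded here,
            # for example a ConfigurationPolicy or a Gatekeeper constraint.
            apiVersion: policy.open-cluster-management.io/v1
            kind: ConfigurationPolicy
            metadata:
              name: example-template
            spec:
              remediationAction: inform
              severity: low
              object-templates: []  # object templates would go here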

Configuration policy

The ConfigurationPolicy is provided by OCM and defines Kubernetes manifests to compare with objects that currently exist on the cluster. How a ConfigurationPolicy evaluates those objects is determined by its complianceType. Compliance types include musthave, mustnothave, and mustonlyhave. musthave means the listed keys and values must exist as a subset of the object on the cluster. mustnothave means an object matching the listed keys and values must not exist. mustonlyhave ensures the object exists with exactly the keys and values defined.
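
For illustration, here is a minimal ConfigurationPolicy sketch (resource names are placeholders) that uses musthave to require a ConfigMap to contain a given key and value:

    apiVersion: policy.open-cluster-management.io/v1
    kind: ConfigurationPolicy
    metadata:
      name: example-configmap-policy
    spec:
      remediationAction: inform     # report violations; enforce would correct them
      severity: low
      object-templates:
        - complianceType: musthave
          objectDefinition:
            apiVersion: v1
            kind: ConfigMap
            metadata:
              name: example-config
              namespace: default
            data:
              log-level: info       # the ConfigMap must contain at least this key and value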

Open Policy Agent Gatekeeper

Gatekeeper is a validating webhook with auditing capabilities that can enforce custom resource definition-based policies that are run with the Open Policy Agent (OPA). Gatekeeper ConstraintTemplates and constraints can be provided in an OCM Policy to sync to managed clusters that have Gatekeeper installed on them.

1 - Configuration Policy

The ConfigurationPolicy defines Kubernetes manifests to compare with objects that currently exist on the cluster. The Configuration policy controller is provided by Open Cluster Management and runs on managed clusters.

Prerequisites

You must meet the following prerequisites to install the configuration policy controller:

  • Ensure kubectl and kustomize are installed.

  • Ensure Golang is installed if you plan to install from source.

  • Ensure the open-cluster-management policy framework is installed. See Policy Framework for more information. A quick way to check is sketched below.
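
For example, one way to confirm the policy framework addon is registered for a managed cluster (this assumes the addon is named governance-policy-framework and that the hub has a namespace matching the managed cluster name):

    # Run on the hub cluster; replace <cluster_name> with your managed cluster name
    kubectl get managedclusteraddon -n <cluster_name>
    # Look for an entry such as governance-policy-framework in the output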

Installing the configuration policy controller

Deploy via Clusteradm CLI

Ensure the clusteradm CLI is installed and is newer than v0.3.0. Download and extract the clusteradm binary; for more details, see the clusteradm GitHub page.
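
For example, the OCM quick start describes an install script for clusteradm; treat the exact path as an assumption and verify it against the clusteradm GitHub page:

    # Download and install the latest clusteradm release
    curl -L https://raw.githubusercontent.com/open-cluster-management-io/clusteradm/main/install.sh | bash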

  1. Deploy the configuration policy controller to the managed clusters (this command is the same for a self-managed hub):

    # Deploy the configuration policy controller
    clusteradm addon enable --names config-policy-controller --clusters <cluster_name> --context ${CTX_HUB_CLUSTER}
    
  2. Ensure the pod is running on the managed cluster with the following command:

    $ kubectl get pods -n open-cluster-management-agent-addon
    NAME                                               READY   STATUS    RESTARTS   AGE
    config-policy-controller-7f8fb64d8c-pmfx4          1/1     Running   0          44s
    

Deploy from source

  1. Deploy the config-policy-controller to the managed cluster with the following commands:

    # The context name of the clusters in your kubeconfig
    # If the clusters are created by KinD, then the context name will follow the pattern "kind-<cluster name>".
    export CTX_HUB_CLUSTER=<your hub cluster context>           # export CTX_HUB_CLUSTER=kind-hub
    export CTX_MANAGED_CLUSTER=<your managed cluster context>   # export CTX_MANAGED_CLUSTER=kind-cluster1
    
    # Configure kubectl to point to the managed cluster
    kubectl config use-context ${CTX_MANAGED_CLUSTER}
    
    # Create the namespace for the controller
    export MANAGED_NAMESPACE="open-cluster-management-agent-addon"
    kubectl create ns ${MANAGED_NAMESPACE}
    
    # Apply the CRD
    export COMPONENT="config-policy-controller"
    export GIT_PATH="https://raw.githubusercontent.com/open-cluster-management-io/${COMPONENT}/v0.12.0/deploy"
    kubectl apply -f ${GIT_PATH}/crds/policy.open-cluster-management.io_configurationpolicies.yaml
    kubectl apply -f ${GIT_PATH}/crds/policy.open-cluster-management.io_operatorpolicies.yaml
    
    # Set the managed cluster name
    export MANAGED_CLUSTER_NAME=<your managed cluster name>  # export MANAGED_CLUSTER_NAME=cluster1
    
    # Deploy the controller
    kubectl apply -f ${GIT_PATH}/operator.yaml -n ${MANAGED_NAMESPACE}
    kubectl set env deployment/${COMPONENT} -n ${MANAGED_NAMESPACE} --containers=${COMPONENT} WATCH_NAMESPACE=${MANAGED_CLUSTER_NAME}
    
  2. Ensure the pod is running on the managed cluster with the following command:

    $ kubectl get pods -n ${MANAGED_NAMESPACE}
    NAME                                               READY   STATUS    RESTARTS   AGE
    config-policy-controller-7f8fb64d8c-pmfx4          1/1     Running   0          44s
    

Sample configuration policy

After a successful deployment, test the policy framework and configuration policy controller with a sample policy. You can use a policy that includes a Placement mapping, or, if you installed Application management’s PlacementRule support, you can use either placement implementation. Follow the steps in the Placement API or the Placement Rule API section, depending on which placement API you want to use.

For more information on how to use a ConfigurationPolicy, read the Policy API concept section.

Placement API

  1. Run the following command to create a policy on the hub that uses Placement:

    # Configure kubectl to point to the hub cluster
    kubectl config use-context ${CTX_HUB_CLUSTER}
    
    # Apply the example policy and placement
    kubectl apply -n default -f https://raw.githubusercontent.com/open-cluster-management-io/policy-collection/main/community/CM-Configuration-Management/policy-pod-placement.yaml
    
  2. Update the Placement to distribute the policy to the managed cluster with the following command (this empty label selector matches all managed clusters bound to the namespace):

    kubectl patch -n default placement.cluster.open-cluster-management.io/placement-policy-pod --type=merge -p "{\"spec\":{\"predicates\":[{\"requiredClusterSelector\":{\"labelSelector\":{\"matchExpressions\":[]}}}]}}"
    
  3. Make sure the default namespace has a ManagedClusterSetBinding for a ManagedClusterSet with at least one managed cluster resource in the ManagedClusterSet. See Bind ManagedClusterSet to a namespace for more information; a minimal binding sketch follows this list.

  4. To confirm that the managed cluster is selected by the Placement, run the following command:

    $ kubectl get -n default placementdecision.cluster.open-cluster-management.io/placement-policy-pod-decision-1 -o yaml
    ...
    status:
      decisions:
      - clusterName: <managed cluster name>
        reason: ""
    ...
    
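As referenced in step 3, here is a minimal ManagedClusterSetBinding sketch to apply on the hub (this assumes a ManagedClusterSet named default; adjust the names to match your environment):

    apiVersion: cluster.open-cluster-management.io/v1beta2
    kind: ManagedClusterSetBinding
    metadata:
      name: default            # must match the name of the ManagedClusterSet
      namespace: default       # the namespace that contains the Policy and Placement
    spec:
      clusterSet: default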

Placement Rule API

NOTE: Skip this section if you applied the Placement API policy manifests.

  1. Run the following command to create a policy on the hub that uses PlacementRule:

    # Configure kubectl to point to the hub cluster
    kubectl config use-context ${CTX_HUB_CLUSTER}
    
    # Apply the example policy and placement rule
    kubectl apply -n default -f https://raw.githubusercontent.com/open-cluster-management-io/policy-collection/main/stable/CM-Configuration-Management/policy-pod.yaml
    
  2. Update the PlacementRule to distribute the policy to the managed cluster with the following command (this empty clusterSelector matches all managed clusters):

    $ kubectl patch -n default placementrule.apps.open-cluster-management.io/placement-policy-pod --type=merge -p "{\"spec\":{\"clusterSelector\":{\"matchExpressions\":[]}}}"
    placementrule.apps.open-cluster-management.io/placement-policy-pod patched
    
  3. To confirm that the managed cluster is selected by the PlacementRule, run the following command:

    $ kubectl get -n default placementrule.apps.open-cluster-management.io/placement-policy-pod -o yaml
    ...
    status:
      decisions:
      - clusterName: ${MANAGED_CLUSTER_NAME}
        clusterNamespace: ${MANAGED_CLUSTER_NAME}
    ...
    

Final steps to apply the policy

Now that you have selected a placement method (Placement or PlacementRule), perform the following steps to continue working with the policy and test the policy framework.

  1. Enforce the policy to make the configuration policy automatically correct any misconfigurations on the managed cluster:

    $ kubectl patch -n default policy.policy.open-cluster-management.io/policy-pod --type=merge -p "{\"spec\":{\"remediationAction\": \"enforce\"}}"
    policy.policy.open-cluster-management.io/policy-pod patched
    
  2. After a few seconds, your policy is propagated to the managed cluster. To confirm, run the following command:

    $ kubectl config use-context ${CTX_MANAGED_CLUSTER}
    $ kubectl get policy -A
    NAMESPACE   NAME                 REMEDIATION ACTION   COMPLIANCE STATE   AGE
    cluster1    default.policy-pod   enforce              Compliant          4m32s
    
  3. The missing pod is created by the policy on the managed cluster. To confirm, run the following command on the managed cluster:

    $ kubectl get pod -n default
    NAME               READY   STATUS    RESTARTS   AGE
    sample-nginx-pod   1/1     Running   0          23s
    

2 - Open Policy Agent Gatekeeper

Gatekeeper is a validating webhook with auditing capabilities that can enforce custom resource definition-based policies that are run with the Open Policy Agent (OPA). Gatekeeper constraints can be used to evaluate Kubernetes resource compliance. You can leverage OPA as the policy engine, and use Rego as the policy language.

Installing Gatekeeper

See the Gatekeeper documentation to install the desired version of Gatekeeper on the managed cluster.
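
For example, the Gatekeeper documentation describes applying a prebuilt release manifest; the version below is only an example, so check the Gatekeeper releases for the version you want:

    # Install a Gatekeeper release on the managed cluster (version shown is an example)
    kubectl config use-context ${CTX_MANAGED_CLUSTER}
    kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/v3.14.0/deploy/gatekeeper.yaml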

Sample Gatekeeper policy

Gatekeeper policies are written using constraint templates and constraints. View the following YAML examples that use Gatekeeper constraints in an OCM Policy:

  • ConstraintTemplates and constraints: Use OCM policies to distribute Gatekeeper constraints to multiple clusters and to aggregate Gatekeeper audit results on the hub cluster. The following example defines a Gatekeeper ConstraintTemplate and constraint (K8sRequiredLabels) to ensure the “gatekeeper” label is set on all namespaces:

    apiVersion: policy.open-cluster-management.io/v1
    kind: Policy
    metadata:
      name: require-gatekeeper-labels-on-ns
    spec:
      remediationAction: inform # (1)
      disabled: false
      policy-templates:
        - objectDefinition:
            apiVersion: templates.gatekeeper.sh/v1beta1
            kind: ConstraintTemplate
            metadata:
              name: k8srequiredlabels
            spec:
              crd:
                spec:
                  names:
                    kind: K8sRequiredLabels
                  validation:
                    openAPIV3Schema:
                      properties:
                        labels:
                          type: array
                          items:
                            type: string
              targets:
                - target: admission.k8s.gatekeeper.sh
                  rego: |
                    package k8srequiredlabels
                    violation[{"msg": msg, "details": {"missing_labels": missing}}] {
                      provided := {label | input.review.object.metadata.labels[label]}
                      required := {label | label := input.parameters.labels[_]}
                      missing := required - provided
                      count(missing) > 0
                      msg := sprintf("you must provide labels: %v", [missing])
                    }                
        - objectDefinition:
            apiVersion: constraints.gatekeeper.sh/v1beta1
            kind: K8sRequiredLabels
            metadata:
              name: ns-must-have-gk
            spec:
              enforcementAction: dryrun
              match:
                kinds:
                  - apiGroups: [""]
                    kinds: ["Namespace"]
              parameters:
                labels: ["gatekeeper"]
    
    1. Since the remediationAction is set to “inform”, the enforcementAction field of the Gatekeeper constraint is overridden to “warn”. This means that Gatekeeper detects and warns you about creating or updating a namespace that is missing the “gatekeeper” label. If the policy remediationAction is set to “enforce”, the Gatekeeper constraint enforcementAction field is overridden to “deny”. In this context, this configuration prevents any user from creating or updating a namespace that is missing the gatekeeper label.

    With the previous policy, you might receive the following policy status message:

    warn - you must provide labels: {“gatekeeper”} (on Namespace default); warn - you must provide labels: {“gatekeeper”} (on Namespace gatekeeper-system).

    Once a policy containing Gatekeeper constraints or ConstraintTemplates is deleted, the constraints and ConstraintTemplates are also deleted from the managed cluster.

    Notes:

    • The Gatekeeper audit functionality runs every minute by default. Audit results are sent back to the hub cluster to be viewed in the OCM policy status of the managed cluster.
  • Auditing Gatekeeper events: The following example uses an OCM configuration policy within an OCM policy to check for Kubernetes API requests denied by the Gatekeeper admission webhook:

    apiVersion: policy.open-cluster-management.io/v1
    kind: Policy
    metadata:
      name: policy-gatekeeper-admission
    spec:
      remediationAction: inform
      disabled: false
      policy-templates:
        - objectDefinition:
            apiVersion: policy.open-cluster-management.io/v1
            kind: ConfigurationPolicy
            metadata:
              name: policy-gatekeeper-admission
            spec:
              remediationAction: inform # will be overridden by remediationAction in parent policy
              severity: low
              object-templates:
                - complianceType: mustnothave
                  objectDefinition:
                    apiVersion: v1
                    kind: Event
                    metadata:
                      namespace: gatekeeper-system # set it to the actual namespace where gatekeeper is running if different
                      annotations:
                        constraint_action: deny
                        constraint_kind: K8sRequiredLabels
                        constraint_name: ns-must-have-gk
                        event_type: violation