Policy controllers
The Policy API on the hub delivers the policies defined in spec.policy-templates to the managed clusters via the policy framework controllers. Once on the managed cluster, each Policy Template is acted upon by its associated controller. The policy framework supports delivering the Policy Template kinds described below.
The ConfigurationPolicy is provided by OCM and defines Kubernetes manifests to compare with objects that currently exist on the cluster. The action that the ConfigurationPolicy takes is determined by its complianceType. Compliance types include musthave, mustnothave, and mustonlyhave. musthave means the object should have the listed keys and values as a subset of the larger object. mustnothave means an object matching the listed keys and values should not exist. mustonlyhave ensures objects only exist with the keys and values exactly as defined.
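For illustration, here is a minimal sketch of a ConfigurationPolicy using the musthave compliance type; it mirrors the shape of the sample policy used later in this section, and the pod name and image are illustrative:

apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: policy-pod-example
spec:
  remediationAction: inform # report only; "enforce" lets the controller create the object
  severity: low
  object-templates:
    # musthave: an object with at least these keys and values must exist
    - complianceType: musthave
      objectDefinition:
        apiVersion: v1
        kind: Pod
        metadata:
          name: sample-nginx-pod # illustrative name
          namespace: default
        spec:
          containers:
            - name: nginx
              image: nginx:1.18.0 # illustrative image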
Gatekeeper is a validating webhook with auditing capabilities that can enforce custom resource definition-based policies that are run with the Open Policy Agent (OPA). Gatekeeper ConstraintTemplates and constraints can be provided in an OCM Policy to sync to managed clusters that have Gatekeeper installed on them.
1 - Configuration Policy
The ConfigurationPolicy defines Kubernetes manifests to compare with objects that currently exist on the cluster. The configuration policy controller is provided by Open Cluster Management and runs on managed clusters.
Prerequisites
You must meet the following prerequisites to install the configuration policy controller:
- Ensure kubectl and kustomize are installed.
- Ensure Golang is installed, if you plan to install from source.
- Ensure the open-cluster-management policy framework is installed. See Policy Framework for more information; an example enablement command follows this list.
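If the policy framework addon is not yet enabled, a sketch of the typical clusteradm invocation follows; the addon name governance-policy-framework reflects common OCM usage, so verify it against the Policy Framework documentation:

# Enable the policy framework addon on a managed cluster from the hub
clusteradm addon enable --names governance-policy-framework --clusters <cluster_name> --context ${CTX_HUB_CLUSTER}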
Installing the configuration policy controller
Deploy via Clusteradm CLI
Ensure the clusteradm CLI is installed and is newer than v0.3.0. Download and extract the clusteradm binary; for more details, see the clusteradm GitHub page.
Deploy the configuration policy controller to the managed clusters (this command is the same for a self-managed hub):
# Deploy the configuration policy controller
clusteradm addon enable --names config-policy-controller --clusters <cluster_name> --context ${CTX_HUB_CLUSTER}
Ensure the pod is running on the managed cluster with the following command:
$ kubectl get pods -n open-cluster-management-agent-addon
NAME                                        READY   STATUS    RESTARTS   AGE
config-policy-controller-7f8fb64d8c-pmfx4   1/1     Running   0          44s
Sample configuration policy
After a successful deployment, test the policy framework and configuration policy controller with a sample policy. For more information on how to use a ConfigurationPolicy, read the Policy API concept section.
Run the following command to create a policy on the hub that uses Placement:
# Configure kubectl to point to the hub cluster
kubectl config use-context ${CTX_HUB_CLUSTER}
# Apply the example policy and placement
kubectl apply -n default -f https://raw.githubusercontent.com/open-cluster-management-io/policy-collection/main/community/CM-Configuration-Management/policy-pod-placement.yaml
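The applied file defines a Policy, a Placement, and a PlacementBinding that connects them. As a sketch of the binding's shape (the resource names here are assumed to match the example file):

apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: binding-policy-pod # assumed name for illustration
  namespace: default
placementRef:
  name: placement-policy-pod # the Placement patched in the next step
  apiGroup: cluster.open-cluster-management.io
  kind: Placement
subjects:
  - name: policy-pod # the Policy enforced later in this walkthrough
    apiGroup: policy.open-cluster-management.io
    kind: Policy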
Update the Placement to distribute the policy to the managed cluster with the following command (this clusterSelector will deploy the policy to all managed clusters):
kubectl patch -n default placement.cluster.open-cluster-management.io/placement-policy-pod --type=merge -p "{\"spec\":{\"predicates\":[{\"requiredClusterSelector\":{\"labelSelector\":{\"matchExpressions\":[]}}}]}}"
Make sure the default namespace has a ManagedClusterSetBinding for a ManagedClusterSet with at least one managed cluster resource in the ManagedClusterSet. See Bind ManagedClusterSet to a namespace for more information; a minimal example follows.
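As a minimal sketch, assuming a ManagedClusterSet named default that contains the managed cluster, the binding looks like this:

apiVersion: cluster.open-cluster-management.io/v1beta2
kind: ManagedClusterSetBinding
metadata:
  name: default # must match the name of the ManagedClusterSet being bound
  namespace: default # the namespace the Placement is created in
spec:
  clusterSet: default # assumed ManagedClusterSet name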
To confirm that the managed cluster is selected by the Placement, run the following command:
$ kubectl get -n default placementdecision.cluster.open-cluster-management.io/placement-policy-pod-decision-1 -o yaml
...
status:
  decisions:
  - clusterName: <managed cluster name>
    reason: ""
...
Enforce the policy to make the configuration policy automatically correct any misconfigurations on the managed cluster:
$ kubectl patch -n default policy.policy.open-cluster-management.io/policy-pod --type=merge -p "{\"spec\":{\"remediationAction\": \"enforce\"}}"
policy.policy.open-cluster-management.io/policy-pod patched
After a few seconds, your policy is propagated to the managed cluster. To confirm, run the following command:
$ kubectl config use-context ${CTX_MANAGED_CLUSTER}
$ kubectl get policy -A
NAMESPACE   NAME                 REMEDIATION ACTION   COMPLIANCE STATE   AGE
cluster1    default.policy-pod   enforce              Compliant          4m32s
The missing pod is created by the policy on the managed cluster. To confirm, run the following command on the managed cluster:
$ kubectl get pod -n default
NAME               READY   STATUS    RESTARTS   AGE
sample-nginx-pod   1/1     Running   0          23s
2 - Open Policy Agent Gatekeeper
Gatekeeper is a validating webhook with auditing capabilities that can enforce custom resource definition-based policies that are run with the Open Policy Agent (OPA). Gatekeeper constraints can be used to evaluate Kubernetes resource compliance. You can leverage OPA as the policy engine, and use Rego as the policy language.
Installing Gatekeeper
See the Gatekeeper documentation to install the desired version of Gatekeeper on the managed cluster.
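As a sketch, one common path is applying a pinned release manifest with kubectl; the version in this URL is an assumption, so check the Gatekeeper documentation for the current release:

# Install a pinned Gatekeeper release (replace v3.14.0 with the desired version)
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/v3.14.0/deploy/gatekeeper.yaml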
Sample Gatekeeper policy
Gatekeeper policies are written using constraint templates and constraints. View the following YAML examples that use Gatekeeper constraints in an OCM Policy:

ConstraintTemplates and constraints: Use the Gatekeeper integration feature by using OCM policies for multicluster distribution of Gatekeeper constraints and Gatekeeper audit results aggregation on the hub cluster. The following example defines a Gatekeeper ConstraintTemplate and constraint (K8sRequiredLabels) to ensure the “gatekeeper” label is set on all namespaces:
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: require-gatekeeper-labels-on-ns
spec:
  remediationAction: inform # (1)
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: templates.gatekeeper.sh/v1beta1
        kind: ConstraintTemplate
        metadata:
          name: k8srequiredlabels
        spec:
          crd:
            spec:
              names:
                kind: K8sRequiredLabels
              validation:
                openAPIV3Schema:
                  properties:
                    labels:
                      type: array
                      items:
                        type: string
          targets:
            - target: admission.k8s.gatekeeper.sh
              rego: |
                package k8srequiredlabels

                violation[{"msg": msg, "details": {"missing_labels": missing}}] {
                  provided := {label | input.review.object.metadata.labels[label]}
                  required := {label | label := input.parameters.labels[_]}
                  missing := required - provided
                  count(missing) > 0
                  msg := sprintf("you must provide labels: %v", [missing])
                }
    - objectDefinition:
        apiVersion: constraints.gatekeeper.sh/v1beta1
        kind: K8sRequiredLabels
        metadata:
          name: ns-must-have-gk
        spec:
          enforcementAction: dryrun
          match:
            kinds:
              - apiGroups: [""]
                kinds: ["Namespace"]
          parameters:
            labels: ["gatekeeper"]
- Since the remediationAction is set to “inform”, the enforcementAction field of the Gatekeeper constraint is overridden to “warn”. This means that Gatekeeper detects and warns you about creating or updating a namespace that is missing the “gatekeeper” label. If the policy remediationAction is set to “enforce”, the Gatekeeper constraint enforcementAction field is overridden to “deny”. In this context, this configuration prevents any user from creating or updating a namespace that is missing the gatekeeper label.
With the previous policy, you might receive the following policy status message:

warn - you must provide labels: {“gatekeeper”} (on Namespace default); warn - you must provide labels: {“gatekeeper”} (on Namespace gatekeeper-system).
Once a policy containing Gatekeeper constraints or ConstraintTemplates is deleted, the constraints and ConstraintTemplates are also deleted from the managed cluster.
Notes:
- The Gatekeeper audit functionality runs every minute by default. Audit results are sent back to the hub cluster to be viewed in the OCM policy status of the managed cluster, as shown below.
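For example, the aggregated audit results can be read from the policy status on the hub; the default namespace here is an assumption matching where the example policy was created:

# On the hub: inspect the status reported back for the Gatekeeper policy
kubectl get policy -n default require-gatekeeper-labels-on-ns -o yaml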
Auditing Gatekeeper events: The following example uses an OCM configuration policy within an OCM policy to check for Kubernetes API requests denied by the Gatekeeper admission webhook:
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-gatekeeper-admission
spec:
  remediationAction: inform
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: policy-gatekeeper-admission
        spec:
          remediationAction: inform # will be overridden by remediationAction in parent policy
          severity: low
          object-templates:
            - complianceType: mustnothave
              objectDefinition:
                apiVersion: v1
                kind: Event
                metadata:
                  namespace: gatekeeper-system # set it to the actual namespace where gatekeeper is running if different
                  annotations:
                    constraint_action: deny
                    constraint_kind: K8sRequiredLabels
                    constraint_name: ns-must-have-gk
                    event_type: violation
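To see this check in action, you can trigger a violation and list the resulting events; this sketch assumes the constraint's action resolves to deny (parent policy set to enforce) and that Gatekeeper is running with the --emit-admission-events flag, which it needs in order to create these Event objects:

# Attempt to create a namespace without the required label; expect a denial
kubectl create namespace test-gatekeeper

# List the denial events that the mustnothave check above looks for
kubectl get events -n gatekeeper-system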