1 - Policy
Overview
Note: this is also covered in the Open Cluster Management - Configuring Your Kubernetes Fleet With the Policy Addon video.
The policy framework has the following API concepts:
- Policy Templates are the policies that perform a desired check or action. For example, ConfigurationPolicy objects are embedded in Policy objects under the policy-templates array.
- A Policy is a grouping mechanism for Policy Templates and is the smallest deployable unit on the hub cluster. Embedded Policy Templates are distributed to applicable managed clusters and acted upon by the appropriate policy controller.
- A PolicySet is a grouping mechanism of Policy objects. Compliance of all grouped Policy objects is summarized in the PolicySet. A PolicySet is a deployable unit and its distribution is controlled by a Placement.
- A PlacementBinding binds a Placement to a Policy or PolicySet.
The second half of the KubeCon NA 2022 - OCM Multicluster App & Config Management video also covers an overview of the Policy addon.
Policy
A Policy is a grouping mechanism for Policy Templates and is the smallest deployable unit on the hub cluster. Embedded Policy Templates are distributed to applicable managed clusters and acted upon by the appropriate policy controller. The compliance state and status of a Policy represent all embedded Policy Templates in the Policy. The distribution of Policy objects is controlled by a Placement.
View a simple example of a Policy that embeds a ConfigurationPolicy policy template to manage a namespace called “prod”.
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-namespace
  namespace: policies
  annotations:
    policy.open-cluster-management.io/standards: NIST SP 800-53
    policy.open-cluster-management.io/categories: CM Configuration Management
    policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
spec:
  remediationAction: enforce
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: policy-namespace-example
        spec:
          remediationAction: inform
          severity: low
          object-templates:
            - complianceType: musthave
              objectDefinition:
                kind: Namespace # must have namespace 'prod'
                apiVersion: v1
                metadata:
                  name: prod
The annotations are standard annotations for informational purposes and can be used by user interfaces, custom report scripts, or components that integrate with OCM.
The optional spec.remediationAction field dictates whether the policy controller should inform or enforce when violations are found and overrides the remediationAction field on each policy template. When set to inform, the Policy will become noncompliant if the underlying policy templates detect that the desired state is not met. When set to enforce, the policy controller applies the desired state when necessary and feasible.
The policy-templates array contains a list of Policy Templates. Here a single ConfigurationPolicy called policy-namespace-example defines a Namespace manifest to compare with objects on the cluster. It has the remediationAction set to inform, but it is overridden by the optional global spec.remediationAction. The severity is for informational purposes, similar to the annotations.
Inside of the embedded ConfigurationPolicy, the object-templates section describes the prod Namespace object that the ConfigurationPolicy applies to. The action that the ConfigurationPolicy will take is determined by the complianceType. In this case, it is set to musthave, which means the prod Namespace object will be created if it doesn’t exist. Other compliance types include mustnothave and mustonlyhave. mustnothave would delete the prod Namespace object. mustonlyhave would ensure the prod Namespace object only exists with the fields defined in the ConfigurationPolicy. See the ConfigurationPolicy page for more information, or see the templating in configuration policies topic for advanced templating use cases with ConfigurationPolicy.
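For contrast, a minimal sketch (not part of the original example) of the same object-templates entry switched to mustnothave, which would delete the prod Namespace when the policy is enforced:
object-templates:
  - complianceType: mustnothave # delete namespace 'prod' when enforced
    objectDefinition:
      kind: Namespace
      apiVersion: v1
      metadata:
        name: prod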
When the Policy is bound to a Placement using a PlacementBinding, the Policy status will report on each cluster that matches the bound Placement:
status:
  compliant: Compliant
  placement:
    - placement: placement-hub-cluster
      placementBinding: binding-policy-namespace
  status:
    - clustername: local-cluster
      clusternamespace: local-cluster
      compliant: Compliant
To fully explore the Policy API, run the following command:
kubectl get crd policies.policy.open-cluster-management.io -o yaml
To fully explore the ConfigurationPolicy API, run the following command:
kubectl get crd configurationpolicies.policy.open-cluster-management.io -o yaml
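For a field-level view without reading the whole CRD, kubectl explain can be used as well; a sketch, assuming your kubectl version supports the --api-version flag with explain:
kubectl explain policies.spec --api-version=policy.open-cluster-management.io/v1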
PlacementBinding
A PlacementBinding binds a Placement to a Policy or PolicySet.
Below is an example of a PlacementBinding that binds the policy-namespace Policy to the placement-hub-cluster Placement.
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: binding-policy-namespace
  namespace: policies
placementRef:
  apiGroup: cluster.open-cluster-management.io
  kind: Placement
  name: placement-hub-cluster
subjects:
  - apiGroup: policy.open-cluster-management.io
    kind: Policy
    name: policy-namespace
Once the Policy is bound, it will be distributed to and acted upon by the managed clusters that match the Placement.
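The referenced Placement is not shown in this section; a minimal sketch of what placement-hub-cluster could look like, assuming a ManagedClusterSetBinding already exists in the policies namespace and that selecting all bound clusters is acceptable:
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: placement-hub-cluster
  namespace: policies
spec:
  # An empty label selector matches all clusters in the bound ManagedClusterSets.
  predicates:
    - requiredClusterSelector:
        labelSelector:
          matchExpressions: []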
PolicySet
A PolicySet is a grouping mechanism of Policy objects. Compliance of all grouped Policy objects is summarized in the PolicySet. A PolicySet is a deployable unit and its distribution is controlled by a Placement when bound through a PlacementBinding.
This enables a workflow where subject matter experts write Policy objects and then an IT administrator creates a PolicySet that groups the previously written Policy objects and binds the PolicySet to a Placement that deploys the PolicySet.
An example of a PolicySet is shown below.
apiVersion: policy.open-cluster-management.io/v1beta1
kind: PolicySet
metadata:
  name: ocm-hardening
  namespace: policies
spec:
  description: Apply standard best practices for hardening your Open Cluster Management installation.
  policies:
    - policy-check-backups
    - policy-managedclusteraddon-available
    - policy-subscriptions
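To deploy this PolicySet, bind it with a PlacementBinding whose subject kind is PolicySet; a sketch, assuming a Placement named placement-ocm-hardening exists in the policies namespace:
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: binding-ocm-hardening
  namespace: policies
placementRef:
  apiGroup: cluster.open-cluster-management.io
  kind: Placement
  name: placement-ocm-hardening
subjects:
  - apiGroup: policy.open-cluster-management.io
    kind: PolicySet
    name: ocm-hardening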
Managed cluster policy controllers
The Policy on the hub delivers the policies defined in spec.policy-templates to the managed clusters via the policy framework controllers. Once on the managed cluster, these Policy Templates are acted upon by the associated controller on the managed cluster. The policy framework supports delivering the Policy Template kinds listed here:
- Configuration policy: The ConfigurationPolicy is provided by OCM and defines Kubernetes manifests to compare with objects that currently exist on the cluster. The action that the ConfigurationPolicy will take is determined by its complianceType. Compliance types include musthave, mustnothave, and mustonlyhave. musthave means the object should have the listed keys and values as a subset of the larger object. mustnothave means an object matching the listed keys and values should not exist. mustonlyhave ensures objects only exist with the keys and values exactly as defined. See the page on Configuration Policy for more information.
- Open Policy Agent Gatekeeper: Gatekeeper is a validating webhook with auditing capabilities that can enforce custom resource definition-based policies that are run with the Open Policy Agent (OPA). Gatekeeper ConstraintTemplates and constraints can be provided in an OCM Policy to sync to managed clusters that have Gatekeeper installed on them. See the page on Gatekeeper integration for more information.
Templating in configuration policies
Configuration policies support the inclusion of Golang text templates in the object definitions. These templates are
resolved at runtime either on the hub cluster or the target managed cluster using configurations related to that
cluster. This gives you the ability to define configuration policies with dynamic content and to inform or enforce
Kubernetes resources that are customized to the target cluster.
The template syntax must follow the Golang template language specification, and the resource definition generated from
the resolved template must be valid YAML. (See the
Golang documentation about package templates for more information.) Any errors
in template validation appear as policy violations. When you use a custom template function, the values are replaced at
runtime.
Template functions, such as resource-specific and generic lookup template functions, are available for referencing Kubernetes resources on the hub cluster (using the {{hub ... hub}} delimiters) or the managed cluster (using the {{ ... }} delimiters). See the Hub cluster templates section for more details. The resource-specific functions are used for convenience and make the content of the resources more accessible. If you use the generic function, lookup, which is more advanced, it is best to be familiar with the YAML structure of the resource that is being looked up. In addition to these functions, utility functions like base64encode, base64decode, indent, autoindent, toInt, and toBool are also available.
To conform templates with YAML syntax, templates must be set in the policy resource as strings using quotes or a block character (| or >). This causes the resolved template value to also be a string. To override this, consider using toInt or toBool as the final function in the template to initiate further processing that forces the value to be interpreted as an integer or boolean, respectively.
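For example, a minimal sketch of an object-templates entry where the template is quoted for YAML parsing and piped through toInt so the resolved value is an integer (the site-config ConfigMap, site1 name, and replicas key are illustrative):
object-templates:
  - complianceType: musthave # musthave compares a subset, so a partial Deployment manifest is fine
    objectDefinition:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: my-app
        namespace: default
      spec:
        # Quoted to satisfy YAML syntax; toInt forces the final value to an integer.
        replicas: '{{ (fromConfigMap "site-config" "site1" "replicas") | toInt }}'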
To bypass template processing you can either:
- Override a single template by wrapping the template in additional braces. For example, the template {{ template content }} would become {{ '{{ template content }}' }}.
- Override all templates in a ConfigurationPolicy by adding the policy.open-cluster-management.io/disable-templates: "true" annotation in the ConfigurationPolicy section of your Policy. Template processing will be bypassed for that ConfigurationPolicy.
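A minimal sketch of where that annotation sits, reusing the ConfigurationPolicy from the earlier example:
policy-templates:
  - objectDefinition:
      apiVersion: policy.open-cluster-management.io/v1
      kind: ConfigurationPolicy
      metadata:
        name: policy-namespace-example
        annotations:
          # Bypass template processing for this entire ConfigurationPolicy.
          policy.open-cluster-management.io/disable-templates: "true"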
Hub cluster templating in configuration policies
Hub cluster templates are used to define configuration policies that are dynamically customized to the target cluster.
This reduces the need to create separate policies for each target cluster or hardcode configuration values in the policy
definitions.
Hub cluster templates are based on Golang text template specifications, and the {{hub … hub}}
delimiter indicates a
hub cluster template in a configuration policy.
A configuration policy definition can contain both hub cluster and managed cluster templates. Hub cluster templates are
processed first on the hub cluster, then the policy definition with resolved hub cluster templates is propagated to the
target clusters. On the managed cluster, the Configuration Policy controller processes any managed cluster templates in
the policy definition and then enforces or verifies the fully resolved object definition.
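As an illustration, a sketch of an object-templates entry mixing both delimiters, assuming a ConfigMap named hub-config exists in the policy's namespace (policies) on the hub cluster (hub templates can generally only reference resources in the policy's own namespace):
object-templates:
  - complianceType: musthave
    objectDefinition:
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: synced-config
        namespace: default
      data:
        # Resolved on the hub cluster before the policy is propagated.
        message: '{{hub fromConfigMap "policies" "hub-config" "message" hub}}'
        # Resolved on each managed cluster at evaluation time.
        platform: '{{ fromClusterClaim "platform.open-cluster-management.io" }}'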
In OCM versions 0.9.x and older, policies are processed on the hub cluster only upon creation or after an update.
Therefore, hub cluster templates are only resolved to the data in the referenced resources upon policy creation or
update. Any changes to the referenced resources are not automatically synced to the policies.
A special annotation, policy.open-cluster-management.io/trigger-update, can be used to indicate changes to the data referenced by the templates. Any change to the special annotation value initiates template processing, and the latest contents of the referenced resource are read and updated in the policy definition that is propagated for processing on managed clusters. A typical way to use this annotation is to increment the value by one each time.
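For example, a sketch of bumping the annotation with kubectl; the value itself is arbitrary, only a change to it triggers reprocessing:
kubectl annotate -n policies policy.policy.open-cluster-management.io/policy-namespace \
  policy.open-cluster-management.io/trigger-update="2" --overwrite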
Templating value encryption
The encryption algorithm uses AES-CBC with 256-bit keys. Each encryption key is unique per managed cluster and is
automatically rotated every 30 days. This ensures that your decrypted value is never stored in the policy on the managed
cluster.
To force an immediate encryption key rotation, delete the policy.open-cluster-management.io/last-rotated annotation on the policy-encryption-key Secret in the managed cluster namespace on the hub cluster. Policies are then reprocessed to use the new encryption key.
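A sketch of that deletion with kubectl (the trailing hyphen removes the annotation), assuming the managed cluster namespace is local-cluster:
kubectl annotate secret policy-encryption-key -n local-cluster \
  policy.open-cluster-management.io/last-rotated-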
Templating functions
- fromSecret: Returns the value of the given data key in the secret.
  Sample:
  PASSWORD: '{{ fromSecret "default" "localsecret" "PASSWORD" }}'
- fromConfigMap: Returns the value of the given data key in the ConfigMap.
  Sample:
  log-file: '{{ fromConfigMap "default" "logs-config" "log-file" }}'
- fromClusterClaim: Returns the value of spec.value in the ClusterClaim resource.
  Sample:
  platform: '{{ fromClusterClaim "platform.open-cluster-management.io" }}'
- lookup: Returns the Kubernetes resource as a JSON-compatible map. Note that if the requested resource does not exist, an empty map is returned.
  Sample:
  metrics-url: |
    http://{{ (lookup "v1" "Service" "default" "metrics").spec.clusterIP }}:8080
- base64enc: Returns a base64 encoded value of the input string.
  Sample:
  USER_NAME: '{{ fromConfigMap "default" "myconfigmap" "admin-user" | base64enc }}'
- base64dec: Returns a base64 decoded value of the input string.
  Sample:
  app-name: |
    "{{ ( lookup "v1" "Secret" "testns" "mytestsecret" ).data.appname | base64dec }}"
- indent: Returns the input string indented by the given number of spaces.
  Sample:
  Ca-cert: |
    {{ ( index ( lookup "v1" "Secret" "default" "mycert-tls" ).data "ca.pem" ) | base64dec | indent 4 }}
- autoindent: Acts like the indent function but automatically determines the number of leading spaces needed based on the number of spaces before the template.
  Sample:
  Ca-cert: |
    {{ ( index ( lookup "v1" "Secret" "default" "mycert-tls" ).data "ca.pem" ) | base64dec | autoindent }}
- toInt: Returns the integer value of the string and ensures that the value is interpreted as an integer in the YAML.
  Sample:
  vlanid: |
    {{ (fromConfigMap "site-config" "site1" "vlan") | toInt }}
- toBool: Returns the boolean value of the input string and ensures that the value is interpreted as a boolean in the YAML.
  Sample:
  enabled: |
    {{ (fromConfigMap "site-config" "site1" "enabled") | toBool }}
- protect: Encrypts the input string. It is decrypted when the policy is evaluated. On the replicated policy in the managed cluster namespace, the resulting value resembles the following: $ocm_encrypted:<encrypted-value>
  Sample:
  enabled: |
    {{hub (lookup "route.openshift.io/v1" "Route" "openshift-authentication" "oauth-openshift").spec.host | protect hub}}
Additionally, OCM supports the following template functions that are included from the sprig open source project:
cat, contains, default, empty, fromJson, hasPrefix, hasSuffix, join, list, lower, mustFromJson, quote, replace, semver, semverCompare, split, splitn, ternary, trim, until, untilStep, upper
See the Sprig documentation for more details.
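Sprig functions compose with the OCM-specific functions through pipes; for example, a sketch that falls back to a default when a ConfigMap key is empty (the app-config ConfigMap and log-level key are illustrative):
log-level: '{{ fromConfigMap "default" "app-config" "log-level" | default "info" }}'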
2 - Policy framework
The policy framework provides governance capabilities to OCM managed Kubernetes clusters. Policies provide visibility
and drive remediation for various security and configuration aspects to help IT administrators meet their requirements.
API Concepts
View the Policy API page for additional details about the Policy API managed by the Policy Framework components.
Architecture
The governance policy framework distributes policies to managed clusters and collects results to send back to the hub
cluster.
Prerequisites
You must meet the following prerequisites to install the policy framework:
- Ensure kubectl and kustomize are installed.
- Ensure the open-cluster-management cluster manager is installed. See Start the control plane for more information.
- Ensure the open-cluster-management klusterlet is installed. See Register a cluster for more information.
- If you are using PlacementRules with your policies, ensure the open-cluster-management application is installed. See Application management for more information. If you are using the default Placement API, you can skip the Application management installation, but you do need to install the PlacementRule CRD with this command:
kubectl apply -f https://raw.githubusercontent.com/open-cluster-management-io/multicloud-operators-subscription/main/deploy/hub-common/apps.open-cluster-management.io_placementrules_crd.yaml
Install the governance-policy-framework hub components
Install via Clusteradm CLI
Ensure the clusteradm CLI is installed and is at least v0.3.0. Download and extract the clusteradm binary. For more details see the clusteradm GitHub page.
- Deploy the policy framework controllers to the hub cluster:
# The context name of the clusters in your kubeconfig
# If the clusters are created by KinD, then the context name will follow the pattern "kind-<cluster name>".
export CTX_HUB_CLUSTER=<your hub cluster context> # export CTX_HUB_CLUSTER=kind-hub
export CTX_MANAGED_CLUSTER=<your managed cluster context> # export CTX_MANAGED_CLUSTER=kind-cluster1
# Set the deployment namespace
export HUB_NAMESPACE="open-cluster-management"
# Deploy the policy framework hub controllers
clusteradm install hub-addon --names governance-policy-framework --context ${CTX_HUB_CLUSTER}
- Ensure the pods are running on the hub with the following command:
$ kubectl get pods -n ${HUB_NAMESPACE}
NAME READY STATUS RESTARTS AGE
governance-policy-addon-controller-bc78cbcb4-529c2 1/1 Running 0 94s
governance-policy-propagator-8c77f7f5f-kthvh 1/1 Running 0 94s
- See more about the governance-policy-framework components, such as the governance-policy-propagator and the governance-policy-addon-controller shown above.
Deploy the synchronization components to the managed cluster(s)
Deploy via Clusteradm CLI
- To deploy the synchronization components to a self-managed hub cluster:
clusteradm addon enable --names governance-policy-framework --clusters <managed_hub_cluster_name> --annotate addon.open-cluster-management.io/on-multicluster-hub=true --context ${CTX_HUB_CLUSTER}
To deploy the synchronization components to a managed cluster:
clusteradm addon enable --names governance-policy-framework --clusters <cluster_name> --context ${CTX_HUB_CLUSTER}
- Verify that the governance-policy-framework-addon controller pod is running on the managed cluster with the following command:
$ kubectl get pods -n open-cluster-management-agent-addon
NAME READY STATUS RESTARTS AGE
governance-policy-framework-addon-57579b7c-652zj 1/1 Running 0 87s
What is next
Install the policy controllers to the managed clusters.
3 - Configuration Policy
The ConfigurationPolicy defines Kubernetes manifests to compare with objects that currently exist on the cluster. The Configuration policy controller is provided by Open Cluster Management and runs on managed clusters.
Prerequisites
You must meet the following prerequisites to install the configuration policy controller:
- Ensure kubectl and kustomize are installed.
- Ensure Golang is installed if you are planning to install from source.
- Ensure the open-cluster-management policy framework is installed. See Policy Framework for more information.
Installing the configuration policy controller
Deploy via Clusteradm CLI
Ensure the clusteradm CLI is installed and is newer than v0.3.0. Download and extract the clusteradm binary. For more details see the clusteradm GitHub page.
- Deploy the configuration policy controller to the managed clusters (this command is the same for a self-managed hub):
# Deploy the configuration policy controller
clusteradm addon enable --names config-policy-controller --clusters <cluster_name> --context ${CTX_HUB_CLUSTER}
- Ensure the pod is running on the managed cluster with the following command:
$ kubectl get pods -n open-cluster-management-agent-addon
NAME READY STATUS RESTARTS AGE
config-policy-controller-7f8fb64d8c-pmfx4 1/1 Running 0 44s
Sample configuration policy
After a successful deployment, test the policy framework and configuration policy controller with a sample policy.
For more information on how to use a ConfigurationPolicy, read the Policy API concept section.
- Run the following command to create a policy on the hub that uses Placement:
# Configure kubectl to point to the hub cluster
kubectl config use-context ${CTX_HUB_CLUSTER}
# Apply the example policy and placement
kubectl apply -n default -f https://raw.githubusercontent.com/open-cluster-management-io/policy-collection/main/community/CM-Configuration-Management/policy-pod-placement.yaml
- Update the Placement to distribute the policy to the managed cluster with the following command (this clusterSelector will deploy the policy to all managed clusters):
kubectl patch -n default placement.cluster.open-cluster-management.io/placement-policy-pod --type=merge -p "{\"spec\":{\"predicates\":[{\"requiredClusterSelector\":{\"labelSelector\":{\"matchExpressions\":[]}}}]}}"
- Make sure the default namespace has a ManagedClusterSetBinding for a ManagedClusterSet with at least one managed cluster resource in the ManagedClusterSet. See Bind ManagedClusterSet to a namespace for more information on this; a minimal example is sketched below.
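A sketch of such a binding, assuming a ManagedClusterSet named default that contains your managed cluster:
apiVersion: cluster.open-cluster-management.io/v1beta2
kind: ManagedClusterSetBinding
metadata:
  name: default # the binding name must match the ManagedClusterSet name
  namespace: default
spec:
  clusterSet: default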
- To confirm that the managed cluster is selected by the Placement, run the following command:
$ kubectl get -n default placementdecision.cluster.open-cluster-management.io/placement-policy-pod-decision-1 -o yaml
...
status:
  decisions:
    - clusterName: <managed cluster name>
      reason: ""
...
- Enforce the policy to make the configuration policy automatically correct any misconfigurations on the managed cluster:
$ kubectl patch -n default policy.policy.open-cluster-management.io/policy-pod --type=merge -p "{\"spec\":{\"remediationAction\": \"enforce\"}}"
policy.policy.open-cluster-management.io/policy-pod patched
- After a few seconds, your policy is propagated to the managed cluster. To confirm, run the following command:
$ kubectl config use-context ${CTX_MANAGED_CLUSTER}
$ kubectl get policy -A
NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE
cluster1 default.policy-pod enforce Compliant 4m32s
- The missing pod is created by the policy on the managed cluster. To confirm, run the following command on the managed cluster:
$ kubectl get pod -n default
NAME READY STATUS RESTARTS AGE
sample-nginx-pod 1/1 Running 0 23s
4 - Open Policy Agent Gatekeeper
Gatekeeper is a validating webhook with auditing capabilities that can enforce custom resource definition-based policies that are run with the Open Policy Agent (OPA). Gatekeeper constraints can be used to evaluate Kubernetes resource compliance. You can leverage OPA as the policy engine and use Rego as the policy language.
Installing Gatekeeper
See the Gatekeeper documentation to install the
desired version of Gatekeeper to the managed cluster.
Sample Gatekeeper policy
Gatekeeper policies are written using constraint templates and constraints. View the following YAML examples that use
Gatekeeper constraints in an OCM Policy
:
- ConstraintTemplates and constraints: Use the Gatekeeper integration feature by using OCM policies for multicluster distribution of Gatekeeper constraints and Gatekeeper audit results aggregation on the hub cluster. The following example defines a Gatekeeper ConstraintTemplate and constraint (K8sRequiredLabels) to ensure the “gatekeeper” label is set on all namespaces:
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: require-gatekeeper-labels-on-ns
spec:
  remediationAction: inform # (1)
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: templates.gatekeeper.sh/v1beta1
        kind: ConstraintTemplate
        metadata:
          name: k8srequiredlabels
        spec:
          crd:
            spec:
              names:
                kind: K8sRequiredLabels
              validation:
                openAPIV3Schema:
                  properties:
                    labels:
                      type: array
                      items: string
          targets:
            - target: admission.k8s.gatekeeper.sh
              rego: |
                package k8srequiredlabels
                violation[{"msg": msg, "details": {"missing_labels": missing}}] {
                  provided := {label | input.review.object.metadata.labels[label]}
                  required := {label | label := input.parameters.labels[_]}
                  missing := required - provided
                  count(missing) > 0
                  msg := sprintf("you must provide labels: %v", [missing])
                }
    - objectDefinition:
        apiVersion: constraints.gatekeeper.sh/v1beta1
        kind: K8sRequiredLabels
        metadata:
          name: ns-must-have-gk
        spec:
          enforcementAction: dryrun
          match:
            kinds:
              - apiGroups: [""]
                kinds: ["Namespace"]
          parameters:
            labels: ["gatekeeper"]
1. Since the remediationAction is set to “inform”, the enforcementAction field of the Gatekeeper constraint is overridden to “warn”. This means that Gatekeeper detects and warns you about creating or updating a namespace that is missing the “gatekeeper” label. If the policy remediationAction is set to “enforce”, the Gatekeeper constraint enforcementAction field is overridden to “deny”. In this context, this configuration prevents any user from creating or updating a namespace that is missing the gatekeeper label.
With the previous policy, you might receive the following policy status message: warn - you must provide labels: {“gatekeeper”} (on Namespace default); warn - you must provide labels: {“gatekeeper”} (on Namespace gatekeeper-system).
Once a policy containing Gatekeeper constraints or ConstraintTemplates is deleted, the constraints and ConstraintTemplates are also deleted from the managed cluster.
Notes:
- The Gatekeeper audit functionality runs every minute by default. Audit results are sent back to the hub cluster to be viewed in the OCM policy status of the managed cluster.
- Auditing Gatekeeper events: The following example uses an OCM configuration policy within an OCM policy to check for Kubernetes API requests denied by the Gatekeeper admission webhook:
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-gatekeeper-admission
spec:
  remediationAction: inform
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: policy-gatekeeper-admission
        spec:
          remediationAction: inform # will be overridden by remediationAction in parent policy
          severity: low
          object-templates:
            - complianceType: mustnothave
              objectDefinition:
                apiVersion: v1
                kind: Event
                metadata:
                  namespace: gatekeeper-system # set it to the actual namespace where gatekeeper is running if different
                  annotations:
                    constraint_action: deny
                    constraint_kind: K8sRequiredLabels
                    constraint_name: ns-must-have-gk
                    event_type: violation