5.2.2 Policy Engine¶
Kyverno is a Kubernetes-native policy engine that acts as an admission controller, validating, mutating, and generating resources based on policies — enforcing security guardrails and governance rules before non-compliant resources reach the cluster.
How to use this page
Each component has an Install section showing the Flux HelmRelease, a Configuration section with Helm values, and a Verify section to confirm it is working.
All code blocks are labelled with their file path in the repository. Select your target environment (AWS or Bare Metal) in any tab group — the choice syncs across the entire page.
- **Using the existing `rciis-devops` repository:** All files already exist. Skip the `mkdir` and `git add`/`git commit` commands — they are for users building a new repository. Simply review the files, edit values for your environment, and push.
- **Building a new repository from scratch:** Follow the `mkdir`, file creation, and `git` commands in order.
- **No Git access:** Expand the "Alternative: Flux CLI" block under each Install section.
Kyverno¶
Kyverno is a policy engine deployed as a Kubernetes admission controller. It enforces policies that validate container images, require resource limits and labels, generate default network policies, and verify image signatures — protecting the cluster from non-compliant, unsigned, or vulnerable workloads.
Kyverno runs in the kyverno namespace with multiple replicas for high availability.
On AWS, the resource requests are reduced to optimize for smaller instance types.
On Bare Metal, policies include domain-specific rules for RCIIS applications.
Install¶
The base HelmRelease tells Flux which chart to install. This file is shared across all environments — environment-specific settings are applied via patches (shown in the Configuration section).
Create the base directory and file:
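For example, matching the `flux/infra/base/kyverno.yaml` path used on this page:

```shell
# Create the base directory for shared infrastructure manifests
mkdir -p flux/infra/base
```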
| Field | Value | Explanation |
|---|---|---|
| `chart` | `kyverno` | The Helm chart name from the Kyverno registry |
| `version` | `3.3.7` | Pinned chart version — update this to upgrade Kyverno |
| `sourceRef.name` | `kyverno` | References a HelmRepository CR pointing to https://kyverno.github.io/kyverno |
| `targetNamespace` | `kyverno` | Kyverno runs in its own namespace with dedicated RBAC |
| `dependsOn[0].name` | `cert-manager` | Kyverno webhooks require TLS certificates from cert-manager |
| `crds: CreateReplace` | — | Automatically installs and updates Kyverno policy CRDs |
| `remediation.retries` | `3` | Flux retries up to 3 times if the install or upgrade fails |
Save the following as flux/infra/base/kyverno.yaml:
```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: kyverno
  namespace: flux-system
spec:
  dependsOn:
    - name: cert-manager
  targetNamespace: kyverno
  interval: 30m
  chart:
    spec:
      chart: kyverno
      version: "3.3.7"
      sourceRef:
        kind: HelmRepository
        name: kyverno
        namespace: flux-system
  releaseName: kyverno
  install:
    createNamespace: true
    crds: CreateReplace
    remediation:
      retries: 3
  upgrade:
    crds: CreateReplace
    remediation:
      retries: 3
  values:
    admissionController:
      replicas: 3
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app.kubernetes.io/component: admission-controller
      container:
        resources:
          requests:
            cpu: 100m
            memory: 256Mi
          limits:
            cpu: 500m
            memory: 512Mi
      serviceMonitor:
        enabled: true
        additionalLabels:
          release: prometheus
    backgroundController:
      replicas: 2
      resources:
        requests:
          cpu: 100m
          memory: 128Mi
        limits:
          cpu: 500m
          memory: 256Mi
    webhookAnnotations:
      admissionregistration.k8s.io/timeout: "10"
    config:
      webhooks:
        - objectSelector:
            matchExpressions:
              - key: kubernetes.io/metadata.name
                operator: NotIn
                values:
                  - kube-system
                  - kyverno
```
Alternative: Flux CLI
If you do not have Git access, install Kyverno directly with Flux:
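A sketch of the equivalent Flux CLI invocation, using the chart registry URL and version from the table above (verify flag names against your installed Flux version):

```shell
# Register the Kyverno Helm repository as a Flux source
flux create source helm kyverno \
  --url=https://kyverno.github.io/kyverno \
  --interval=30m \
  --namespace=flux-system

# Create a HelmRelease mirroring flux/infra/base/kyverno.yaml
flux create helmrelease kyverno \
  --source=HelmRepository/kyverno \
  --chart=kyverno \
  --chart-version=3.3.7 \
  --target-namespace=kyverno \
  --create-target-namespace \
  --namespace=flux-system
```

Note that this creates the release with default chart values; the per-environment values shown in the Configuration section still need to be applied separately.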
Configuration¶
The environment patch overrides the base HelmRelease with cluster-specific settings. On AWS, the patch reduces replicas and resource requests to optimize for cost and instance size. On Bare Metal, the patch includes additional Kyverno controller configurations.
Create the environment overlay directory:
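For example, assuming overlays mirror the `flux/infra/baremetal/kyverno/` path mentioned on this page (the `aws` path here is an assumed parallel layout; adjust to your repository):

```shell
# Per-environment overlay directories (aws path is illustrative)
mkdir -p flux/infra/aws/kyverno
mkdir -p flux/infra/baremetal/kyverno
```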
Environment Patch¶
Save the following as the patch file for your environment.

On AWS, Kyverno runs with fewer replicas and reduced resource requests to minimize operational overhead on smaller clusters. The patch also pins a newer chart version (3.7.1) for additional fixes and features:
```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: kyverno
spec:
  chart:
    spec:
      version: "3.7.1"
  values:
    admissionController:
      replicas: 1
      topologySpreadConstraints: []
      container:
        resources:
          requests:
            cpu: 50m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
    backgroundController:
      replicas: 1
      resources:
        requests:
          cpu: 50m
          memory: 64Mi
        limits:
          cpu: 250m
          memory: 128Mi
    config:
      webhooks: []
      excludeKyvernoNamespace: true
      resourceFiltersExcludeNamespaces:
        - kube-system
        - kyverno
```
| Setting | Value | Why |
|---|---|---|
| `chart.spec.version` | `3.7.1` | AWS uses a newer chart version for additional stability and features |
| `admissionController.replicas` | `1` | Single replica on AWS to reduce cost and memory footprint |
| `admissionController.topologySpreadConstraints` | `[]` | Topology spreading disabled for smaller clusters |
| `backgroundController.replicas` | `1` | Reduced from 2 to 1 replica for cost optimization |
| `container.resources.requests.cpu` | `50m` | Half the base CPU request for smaller instance types |
| `config.webhooks` | `[]` | Empty webhook config to use defaults suitable for AWS |
On Bare Metal (proxmox), Kyverno uses the base configuration as-is. No patch is required. The base chart version (3.3.7) and full HA setup are suitable for on-premises deployments.
```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: kyverno
spec:
  # Base configuration from flux/infra/base/kyverno.yaml is used as-is
  # No overrides required for bare metal environment
  values: {}
```
Bare Metal Configuration
Bare Metal environments use the full base configuration with 3 admission controller
replicas and 2 background controller replicas. All Kyverno policies defined in
flux/infra/baremetal/kyverno/ are applied via Kustomization.
Kustomization Files¶
Create the Kustomization file for your environment. It references the base HelmRelease together with the environment-specific resources (the patch on AWS, the policy files on Bare Metal):
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base/kyverno.yaml
  - policy-image-allowlist.yaml
  - policy-pod-security.yaml
  - policy-require-resources.yaml
  - policy-require-labels.yaml
  - policy-generate-netpol.yaml
```
Bare Metal Policy Resources
The Bare Metal Kustomization includes Kyverno ClusterPolicy resources that enforce image registry restrictions, Pod Security Standards, resource limits, and standard labels. These policies are defined in separate YAML files in the same directory. See Kyverno Policies below for details.
Core Kyverno Policies¶
Bare Metal deployments include a set of core policies that enforce security and governance standards. These policies are defined as ClusterPolicy resources and are applied via the Kustomization.
Policy: Image Registry Allowlist¶
Restrict container images to trusted registries only:
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-image-registries
  annotations:
    policies.kyverno.io/title: Restrict Image Registries
    policies.kyverno.io/category: Supply Chain Security
    policies.kyverno.io/severity: high
spec:
  validationFailureAction: Enforce
  background: true
  rules:
    - name: validate-registries
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: >
          Image is not from an approved registry. Allowed registries:
          harbor.devops.africa, docker.io/library, ghcr.io/aquasecurity,
          quay.io/argoproj, registry.k8s.io.
        pattern:
          spec:
            containers:
              - image: >-
                  harbor.devops.africa/* |
                  docker.io/library/* |
                  ghcr.io/aquasecurity/* |
                  quay.io/argoproj/* |
                  registry.k8s.io/*
```
Policy: Pod Security Standards¶
Enforce the Kubernetes restricted Pod Security Standard:
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: enforce-pod-security-restricted
  annotations:
    policies.kyverno.io/title: Enforce Pod Security Restricted
    policies.kyverno.io/category: Pod Security
    policies.kyverno.io/severity: high
spec:
  validationFailureAction: Enforce
  background: true
  rules:
    - name: disallow-privileged
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Privileged containers are not allowed."
        pattern:
          spec:
            containers:
              - =(securityContext):
                  =(privileged): false
    - name: disallow-host-namespaces
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Host namespaces (hostNetwork, hostPID, hostIPC) are not allowed."
        pattern:
          spec:
            =(hostNetwork): false
            =(hostPID): false
            =(hostIPC): false
    - name: require-run-as-non-root
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Containers must run as non-root."
        pattern:
          spec:
            containers:
              - securityContext:
                  runAsNonRoot: true
```
Policy: Require Resource Limits¶
Ensure all containers specify resource requests and limits:
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-resource-limits
  annotations:
    policies.kyverno.io/title: Require Resource Limits
    policies.kyverno.io/category: Resource Management
    policies.kyverno.io/severity: medium
spec:
  validationFailureAction: Enforce
  background: true
  rules:
    - name: validate-resources
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "All containers must specify CPU and memory requests and a memory limit."
        pattern:
          spec:
            containers:
              - resources:
                  requests:
                    memory: "?*"
                    cpu: "?*"
                  limits:
                    memory: "?*"
```
Policy: Require Standard Labels¶
Enforce standard labels on all workloads:
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-labels
  annotations:
    policies.kyverno.io/title: Require Standard Labels
    policies.kyverno.io/category: Governance
    policies.kyverno.io/severity: medium
spec:
  validationFailureAction: Audit
  background: true
  rules:
    - name: require-app-labels
      match:
        any:
          - resources:
              kinds:
                - Deployment
                - StatefulSet
                - DaemonSet
      validate:
        message: >
          Resources must have labels 'app.kubernetes.io/name'
          and 'app.kubernetes.io/part-of'.
        pattern:
          metadata:
            labels:
              app.kubernetes.io/name: "?*"
              app.kubernetes.io/part-of: "?*"
```
Policy: Generate Default NetworkPolicy¶
Auto-create a default-deny NetworkPolicy in every new namespace:
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: generate-default-deny-netpol
  annotations:
    policies.kyverno.io/title: Generate Default-Deny NetworkPolicy
    policies.kyverno.io/category: Networking
    policies.kyverno.io/severity: medium
spec:
  rules:
    - name: default-deny
      match:
        any:
          - resources:
              kinds:
                - Namespace
      exclude:
        any:
          - resources:
              namespaces:
                - kube-system
                - kyverno
      generate:
        apiVersion: networking.k8s.io/v1
        kind: NetworkPolicy
        name: default-deny-all
        namespace: "{{ request.object.metadata.name }}"
        synchronize: true
        data:
          spec:
            podSelector: {}
            policyTypes:
              - Ingress
              - Egress
```
Key policy settings (Bare Metal):
| Policy | Mode | Purpose |
|---|---|---|
| `restrict-image-registries` | Enforce | Blocks images from unknown registries |
| `enforce-pod-security-restricted` | Enforce | Prevents privileged pods and host namespace access |
| `require-resource-limits` | Enforce | Requires CPU/memory requests and a memory limit on every container |
| `require-labels` | Audit | Warns about missing standard labels (non-blocking) |
| `generate-default-deny-netpol` | Generate | Auto-creates default-deny NetworkPolicy in new namespaces |
Extra Manifests¶
Save the following additional resources for Bare Metal environments:
```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: kyverno-alerts
  namespace: kyverno
  labels:
    release: prometheus
spec:
  groups:
    - name: kyverno
      rules:
        - alert: KyvernoWebhookDown
          expr: |
            kube_deployment_status_replicas_available{
              namespace="kyverno",
              deployment="kyverno-admission-controller"
            } == 0
          for: 5m
          labels:
            severity: critical
          annotations:
            summary: "Kyverno admission controller has zero available replicas"
            description: >
              All Kyverno admission controller pods are down.
              If the webhook failurePolicy is Fail, all resource creation is blocked.
        - alert: KyvernoPolicyViolationRate
          expr: |
            sum(rate(kyverno_policy_results_total{rule_result="fail"}[5m])) > 1
          for: 15m
          labels:
            severity: warning
          annotations:
            summary: "High Kyverno policy violation rate"
            description: >
              More than 1 policy violation per second for 15 minutes.
              Review PolicyReport and ClusterPolicyReport for violations.
```
Commit and Deploy¶
Once all files are in place, commit and push to trigger Flux deployment:
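For example (the branch name `main` is an assumption; use your repository's default branch):

```shell
git add flux/
git commit -m "Add Kyverno policy engine"
git push origin main
```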
Flux will detect the new commit and begin deploying Kyverno. To trigger an immediate sync instead of waiting for the next poll interval:
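A typical invocation, using the `flux-system` source and the `infra-kyverno` Kustomization named elsewhere on this page:

```shell
# Fetch the latest Git revision, then re-apply the Kyverno Kustomization
flux reconcile source git flux-system -n flux-system
flux reconcile kustomization infra-kyverno -n flux-system
```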
Verify¶
After Kyverno is deployed, confirm it is working:
```shell
# Confirm the admission webhook is registered
kubectl get validatingwebhookconfigurations | grep kyverno
```
Expected output includes Kyverno's webhook configurations, such as `kyverno-resource-validating-webhook-cfg` and `kyverno-policy-validating-webhook-cfg` (exact names can vary by chart version).
On AWS, no additional policies are deployed (AWS uses the base Kyverno install only). Verify that the admission controller is responsive:
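One way to check this (the pod name `webhook-probe` is illustrative; a server-side dry-run passes through admission without creating anything):

```shell
# Pods should be Running and the deployments Available
kubectl get pods -n kyverno
kubectl get deploy -n kyverno

# A server-side dry-run exercises the admission webhook path end to end
kubectl run webhook-probe --image=busybox --dry-run=server -n default
```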
On Bare Metal, verify that all core policies are deployed and active:
```shell
# List all cluster policies
kubectl get clusterpolicies
# Expected output includes: restrict-image-registries,
# enforce-pod-security-restricted, require-resource-limits,
# require-labels, generate-default-deny-netpol

# Verify the policy reports
kubectl get clusterpolicyreports

# Test a policy violation — attempt to deploy an image from an
# unapproved registry (should be blocked)
kubectl run test-image --image=nginx:latest --restart=Never
# Expected: Error from server (Forbidden): admission webhook denied the request:
# Image 'nginx:latest' is not from an approved registry.

# Test the generate policy — create a new namespace and verify
# default-deny NetworkPolicy is auto-created
kubectl create namespace test-policy-gen
kubectl get networkpolicies -n test-policy-gen
# Expected: NAME               POD-SELECTOR   AGE
#           default-deny-all   <none>         <1s>

# Cleanup
kubectl delete namespace test-policy-gen --ignore-not-found
kubectl delete pod test-image --ignore-not-found
```
Flux Operations¶
This component is managed by Flux as HelmRelease kyverno and Kustomization infra-kyverno.
Check whether the HelmRelease and Kustomization are in a Ready state:
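For example:

```shell
# Both should report Ready=True with the expected revision
flux get helmreleases kyverno -n flux-system
flux get kustomizations infra-kyverno -n flux-system
```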
Trigger an immediate sync — pulls the latest Git revision and re-applies the manifests. Use after pushing config changes or to verify a fix:
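For example:

```shell
# --with-source fetches the latest Git revision before applying
flux reconcile kustomization infra-kyverno -n flux-system --with-source
```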
Trigger a Helm upgrade — re-runs the Helm install/upgrade for this release without waiting for the next interval. Use when the HelmRelease values have changed:
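For example:

```shell
flux reconcile helmrelease kyverno -n flux-system
```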
View recent Flux controller logs for this release — useful for diagnosing why a sync or upgrade failed:
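One way to do this (flag spellings may differ slightly between Flux versions):

```shell
# Flux event/log stream filtered to this HelmRelease
flux logs --kind=HelmRelease --name=kyverno --namespace=flux-system

# Or inspect the helm-controller directly
kubectl logs -n flux-system deploy/helm-controller --tail=100 | grep -i kyverno
```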
Recovering a stalled HelmRelease
If the HelmRelease shows Stalled with RetriesExceeded, Flux will not retry automatically. Suspend and resume to clear the failure counter, then reconcile:
```shell
flux suspend helmrelease kyverno -n flux-system
flux resume helmrelease kyverno -n flux-system
flux reconcile kustomization infra-kyverno -n flux-system
```
Only run this after confirming the underlying issue (e.g. pod crash, timeout) has been resolved. See Maintenance — Recovering Stalled Resources for details.
Next Steps¶
Proceed to the next layer of security: continue to 5.2.3 Vulnerability Scanning.