5.1.3 GitOps & Delivery¶
Flux provides GitOps-based continuous deployment, ensuring the cluster state matches the desired state declared in Git. Argo Rollouts extends Kubernetes Deployments with progressive delivery strategies (canary, blue-green) for safe application rollouts. Crossplane is a Kubernetes-native control plane framework that lets you manage external resources as custom resources.
How to use this page
Each component has an Install section showing the Flux HelmRelease, a Configuration section with Helm values, and a Verify section to confirm it is working.
All code blocks are labelled with their file path in the repository. Select your target environment (AWS or Bare Metal) in any tab group — the choice syncs across the entire page.
- Using the existing `rciis-devops` repository: All files already exist. Skip the `mkdir` and `git add`/`git commit` commands — they are for users building a new repository. Simply review the files, edit values for your environment, and push.
- Building a new repository from scratch: Follow the `mkdir`, file creation, and `git` commands in order.
- No Git access: Expand the "Alternative: Helm CLI" block under each Install section.
Flux¶
Flux is the GitOps controller that reconciles the cluster state with this repository. It manages all infrastructure and application deployments through HelmReleases and Kustomizations. The installation includes support for SOPS-encrypted secret decryption and automatic reconciliation of Helm charts and Kustomize overlays.
Bootstrap¶
Create the cluster configuration directories:
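A minimal sketch of that step, with paths taken from the repository layout described in the Directory Structure section below:

```shell
# Cluster entry point and environment-specific patches
mkdir -p flux/clusters/aws/patches
```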
Flux is installed via the flux bootstrap command, which:
- Creates the `flux-system` namespace
- Installs the Flux controllers (helm-controller, kustomize-controller, source-controller, notification-controller)
- Creates the Git repository integration (SSH or HTTPS)
- Applies the initial cluster configuration from this repository
flux bootstrap github \
--owner=MagnaBC \
--repository=rciis-devops \
--branch=master \
--path=flux/clusters/aws \
--personal
This command performs a one-time bootstrap. Flux itself is then self-managed — updates to the Flux controllers are deployed via HelmReleases and Kustomizations in the same repository.
Alternative: Manual Installation
If using a private Git backend other than GitHub, install Flux manually:
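A sketch using the generic Flux CLI install, which deploys the controllers without any Git provider integration:

```shell
# Install the Flux controllers into flux-system (no bootstrap, no Git provider)
flux install --namespace=flux-system
```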
Then manually configure the GitRepository source and Kustomization for cluster reconciliation.

Directory Structure¶
Flux deployments are organized in a Git repository with the following structure:
flux/
├── clusters/aws/
│ ├── infrastructure.yaml # Entry point: ResourceSet defining deployment order
│ └── patches/ # Environment-specific patches and overrides
│
├── infra/
│ ├── base/ # Base HelmReleases (shared across environments)
│ │ ├── cert-manager.yaml
│ │ ├── cilium.yaml
│ │ ├── argo-rollouts.yaml
│ │ ├── crossplane.yaml
│ │ ├── prometheus.yaml
│ │ ├── loki.yaml
│ │ ├── velero.yaml
│ │ └── ...
│ │
│ └── aws/ # AWS environment overlays
│ ├── kustomization.yaml # Patches and values for AWS
│ └── {component}/
│ ├── patch.yaml # Environment-specific Helm values patch
│ └── values.yaml # Full values for AWS deployment
- `flux/clusters/aws/infrastructure.yaml` — The main entry point. It references base Kustomizations and defines deployment order via `dependsOn` chains.
- `flux/infra/base/` — Base HelmRelease definitions, sourced from upstream Helm repositories.
- `flux/infra/aws/` — AWS environment overlays: Kustomize patches that customize values for the AWS deployment.
- SOPS secrets — Encrypted secrets in `apps/infra/secrets/` are decrypted at deployment time via Flux's native SOPS integration.
Verify¶
# List HelmReleases and their sync status
flux get helmrelease -n flux-system
flux get helmrelease --all-namespaces
# List Kustomizations and their sync status
flux get kustomization -n flux-system
flux get kustomization --all-namespaces
Flux Operations¶
Flux manages itself after bootstrap — its own controllers and configuration are reconciled from the same repository.
Check whether all Flux sources and kustomizations are in a Ready state:
Trigger an immediate sync of the Git source — pulls the latest commit without waiting for the next poll interval:
Trigger reconciliation of all Kustomizations — re-applies all manifests from the current Git revision:
View recent Flux controller logs — useful for diagnosing why a sync or upgrade failed:
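The four operations above can be sketched with standard Flux CLI commands; the source and root Kustomization names (`flux-system`) assume the bootstrap defaults:

```shell
# Readiness of all Git sources and Kustomizations
flux get sources git --all-namespaces
flux get kustomizations --all-namespaces

# Pull the latest commit from the Git source immediately
flux reconcile source git flux-system -n flux-system

# Re-apply all manifests from the current revision
# (reconciling the root Kustomization cascades to dependent ones)
flux reconcile kustomization flux-system -n flux-system --with-source

# Recent controller logs, for diagnosing failed syncs or upgrades
flux logs --all-namespaces --since=10m
```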
Recovering a stalled Kustomization
If a Kustomization shows Stalled or NotReady, suspend and resume to clear the failure counter, then reconcile:
flux suspend kustomization <name> -n flux-system
flux resume kustomization <name> -n flux-system
flux reconcile kustomization <name> -n flux-system --with-source
Only run this after confirming the underlying issue (e.g. missing secret, invalid manifest) has been resolved. See Maintenance — Recovering Stalled Resources for details.
Next: Continue to Argo Rollouts below.
Argo Rollouts¶
Argo Rollouts extends Kubernetes Deployments with progressive delivery strategies including canary releases and blue-green deployments. It integrates with Gateway API and APISIX for traffic splitting during rollouts.
Install¶
The base HelmRelease tells Flux which chart to install. This file is shared across all environments — environment-specific settings are applied via patches (shown in the Configuration section).
Create the base directory and file:
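A sketch of the directory step, following the repository layout shown in the Flux section:

```shell
# Base HelmReleases shared across environments
mkdir -p flux/infra/base
```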
| Field | Value | Explanation |
|---|---|---|
| `chart` | `argo-rollouts` | The Helm chart name from the Argo repository |
| `version` | `2.40.5` | Pinned chart version — update this to upgrade Argo Rollouts |
| `sourceRef.name` | `argo` | References a HelmRepository CR pointing to https://argoproj.github.io/argo-helm |
| `targetNamespace` | `argo-rollouts` | Namespace where the Argo Rollouts controller and dashboard are deployed |
| `crds: CreateReplace` | — | Automatically installs and updates Argo Rollouts CRDs |
| `remediation.retries` | `3` | Flux retries up to 3 times if the install or upgrade fails |
Save the following as flux/infra/base/argo-rollouts.yaml:
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: argo-rollouts
namespace: flux-system
spec:
targetNamespace: argo-rollouts
interval: 30m
chart:
spec:
chart: argo-rollouts
version: "2.40.5"
sourceRef:
kind: HelmRepository
name: argo
namespace: flux-system
releaseName: argo-rollouts
install:
createNamespace: true
crds: CreateReplace
remediation:
retries: 3
upgrade:
crds: CreateReplace
remediation:
retries: 3
Alternative: Helm CLI
If you do not have Git access, install Argo Rollouts directly:
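A sketch using the chart coordinates from the table above (the repository alias is arbitrary):

```shell
# Add the Argo Helm repository and install the pinned chart version
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update
helm install argo-rollouts argo/argo-rollouts \
  --namespace argo-rollouts --create-namespace \
  --version 2.40.5
```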
Configuration¶
The environment patch overrides the base HelmRelease with cluster-specific settings. The values file controls how Argo Rollouts behaves. Select your environment below.
Create the environment overlay directory:
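A sketch of that step, using the overlay layout from the Flux directory structure:

```shell
# AWS overlay for Argo Rollouts (patch.yaml and values.yaml live here)
mkdir -p flux/infra/aws/argo-rollouts
```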
Environment Patch¶
The patch file reduces resource allocation for the AWS environment. Save the
following as flux/infra/aws/argo-rollouts/patch.yaml:
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: argo-rollouts
spec:
values:
controller:
replicas: 1
resources:
limits:
cpu: 250m
memory: 128Mi
requests:
cpu: 50m
memory: 64Mi
dashboard:
replicas: 1
resources:
limits:
cpu: 50m
memory: 64Mi
requests:
cpu: 25m
memory: 32Mi
| Setting | Value | Why |
|---|---|---|
| `controller.replicas` | `1` | Single replica for cost efficiency on AWS |
| `dashboard.replicas` | `1` | Single dashboard replica to match the controller count |
| `controller.resources.limits` | CPU: 250m, Memory: 128Mi | Conservative limits for the AWS environment |
| `dashboard.resources.limits` | CPU: 50m, Memory: 64Mi | Minimal overhead for the dashboard |
Helm Values¶
# Argo Rollouts — AWS HA configuration
# 2 controller replicas, dashboard, APISIX + nginx provider RBAC
controller:
replicas: 2
component: rollouts-controller
topologySpreadConstraints:
- maxSkew: 1
topologyKey: kubernetes.io/hostname
whenUnsatisfiable: DoNotSchedule
nodeTaintsPolicy: Honor
labelSelector:
matchLabels:
app.kubernetes.io/component: rollouts-controller
resources:
limits:
cpu: 500m
memory: 256Mi
requests:
cpu: 100m
memory: 128Mi
dashboard:
enabled: true
readonly: true
replicas: 2
topologySpreadConstraints:
- maxSkew: 1
topologyKey: kubernetes.io/hostname
whenUnsatisfiable: DoNotSchedule
nodeTaintsPolicy: Honor
labelSelector:
matchLabels:
app.kubernetes.io/component: rollouts-dashboard
resources:
limits:
cpu: 100m
memory: 128Mi
requests:
cpu: 50m
memory: 64Mi
metrics:
enabled: true
serviceMonitor:
enabled: true
namespace: ""
additionalLabels:
release: prometheus
interval: 30s
providerRBAC:
enabled: true
providers:
nginx: true
istio: false
smi: false
ambassador: false
awsLoadBalancerController: false
apisix: true
installCRDs: true
clusterInstall: true
podSecurityContext:
runAsNonRoot: true
runAsUser: 999
fsGroup: 999
seccompProfile:
type: RuntimeDefault
containerSecurityContext:
readOnlyRootFilesystem: true
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
# Argo Rollouts — AWS Non-HA configuration
# Single controller, dashboard enabled but single replica
controller:
replicas: 1
component: rollouts-controller
resources:
limits:
cpu: 250m
memory: 128Mi
requests:
cpu: 50m
memory: 64Mi
dashboard:
enabled: true
readonly: true
replicas: 1
resources:
limits:
cpu: 50m
memory: 64Mi
requests:
cpu: 25m
memory: 32Mi
metrics:
enabled: true
serviceMonitor:
enabled: false
providerRBAC:
enabled: true
providers:
nginx: true
istio: false
smi: false
ambassador: false
awsLoadBalancerController: false
apisix: true
installCRDs: true
clusterInstall: true
podSecurityContext:
runAsNonRoot: true
runAsUser: 999
fsGroup: 999
containerSecurityContext:
readOnlyRootFilesystem: true
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
Create the environment overlay directory:
Environment patch
No environment patch needed — base configuration is used. Bare Metal uses the full HA or Non-HA values configuration defined below.
Helm Values¶
# Argo Rollouts — Bare Metal HA configuration
# 2 controller replicas, dashboard, APISIX + nginx provider RBAC
controller:
replicas: 2
component: rollouts-controller
topologySpreadConstraints:
- maxSkew: 1
topologyKey: kubernetes.io/hostname
whenUnsatisfiable: DoNotSchedule
nodeTaintsPolicy: Honor
labelSelector:
matchLabels:
app.kubernetes.io/component: rollouts-controller
resources:
limits:
cpu: 500m
memory: 256Mi
requests:
cpu: 100m
memory: 128Mi
dashboard:
enabled: true
readonly: true
replicas: 2
topologySpreadConstraints:
- maxSkew: 1
topologyKey: kubernetes.io/hostname
whenUnsatisfiable: DoNotSchedule
nodeTaintsPolicy: Honor
labelSelector:
matchLabels:
app.kubernetes.io/component: rollouts-dashboard
resources:
limits:
cpu: 100m
memory: 128Mi
requests:
cpu: 50m
memory: 64Mi
metrics:
enabled: true
serviceMonitor:
enabled: true
namespace: ""
additionalLabels:
release: prometheus
interval: 30s
providerRBAC:
enabled: true
providers:
nginx: true
istio: false
smi: false
ambassador: false
awsLoadBalancerController: false
apisix: true
installCRDs: true
clusterInstall: true
podSecurityContext:
runAsNonRoot: true
runAsUser: 999
fsGroup: 999
seccompProfile:
type: RuntimeDefault
containerSecurityContext:
readOnlyRootFilesystem: true
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
# Argo Rollouts — Bare Metal Non-HA configuration
# Single controller, dashboard enabled but single replica
controller:
replicas: 1
component: rollouts-controller
resources:
limits:
cpu: 250m
memory: 128Mi
requests:
cpu: 50m
memory: 64Mi
dashboard:
enabled: true
readonly: true
replicas: 1
resources:
limits:
cpu: 50m
memory: 64Mi
requests:
cpu: 25m
memory: 32Mi
metrics:
enabled: true
serviceMonitor:
enabled: false
providerRBAC:
enabled: true
providers:
nginx: true
istio: false
smi: false
ambassador: false
awsLoadBalancerController: false
apisix: true
installCRDs: true
clusterInstall: true
podSecurityContext:
runAsNonRoot: true
runAsUser: 999
fsGroup: 999
containerSecurityContext:
readOnlyRootFilesystem: true
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
Key settings (all environments):
| Setting | HA | Non-HA | Why |
|---|---|---|---|
| `controller.replicas` | `2` | `1` | HA spreads load across nodes; Non-HA uses a single replica |
| `dashboard.enabled` | `true` | `true` | Dashboard is enabled for rollout visibility in both modes |
| `metrics.serviceMonitor.enabled` | `true` | `false` | HA enables Prometheus integration; Non-HA disables it |
| `providerRBAC.providers.apisix` | `true` | `true` | APISIX traffic splitting is supported in both modes |
| `providerRBAC.providers.gatewayAPI` | `true` | `true` | Gateway API traffic splitting is supported in both modes |
Commit and Deploy¶
Once all files are in place, commit and push to trigger Flux deployment:
Flux will detect the new commit and begin deploying Argo Rollouts. To trigger an immediate sync instead of waiting for the next poll interval:
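A sketch of that sequence (the commit message is illustrative; the source name assumes the bootstrap defaults):

```shell
git add flux/infra/base/argo-rollouts.yaml flux/infra/aws/argo-rollouts/
git commit -m "Add Argo Rollouts HelmRelease and AWS overlay"
git push

# Optional: force Flux to pull the new commit now
flux reconcile source git flux-system -n flux-system
```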
Verify¶
# Check Rollouts controller
kubectl get pods -n argo-rollouts
# Verify CRDs
kubectl get crd rollouts.argoproj.io
# Access the dashboard (port-forward)
kubectl port-forward -n argo-rollouts svc/argo-rollouts-dashboard 3100:3100
# List any active rollouts
kubectl get rollouts -A
Flux Operations¶
This component is managed by Flux as HelmRelease argo-rollouts and Kustomization infra-argo-rollouts.
Check whether the HelmRelease and Kustomization are in a Ready state:
Trigger an immediate sync — pulls the latest Git revision and re-applies the manifests. Use after pushing config changes or to verify a fix:
Trigger a Helm upgrade — re-runs the Helm install/upgrade for this release without waiting for the next interval. Use when the HelmRelease values have changed:
View recent Flux controller logs for this release — useful for diagnosing why a sync or upgrade failed:
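The operations above can be sketched as follows, using the HelmRelease and Kustomization names stated at the start of this section:

```shell
# Readiness of the HelmRelease and its Kustomization
flux get helmrelease argo-rollouts -n flux-system
flux get kustomization infra-argo-rollouts -n flux-system

# Immediate sync: pull the latest Git revision and re-apply the manifests
flux reconcile kustomization infra-argo-rollouts -n flux-system --with-source

# Re-run the Helm install/upgrade for this release
flux reconcile helmrelease argo-rollouts -n flux-system

# Recent helm-controller logs for this release
flux logs --kind=HelmRelease --name=argo-rollouts -n flux-system
```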
Recovering a stalled HelmRelease
If the HelmRelease shows Stalled with RetriesExceeded, Flux will not retry automatically. Suspend and resume to clear the failure counter, then reconcile:
flux suspend helmrelease argo-rollouts -n flux-system
flux resume helmrelease argo-rollouts -n flux-system
flux reconcile kustomization infra-argo-rollouts -n flux-system
Only run this after confirming the underlying issue (e.g. pod crash, timeout) has been resolved. See Maintenance — Recovering Stalled Resources for details.
Crossplane¶
Crossplane is a Kubernetes-native control plane framework that lets you manage external
resources as custom resources. The RCIIS platform uses Crossplane with
provider-keycloak to manage Keycloak realms, clients, and federation as CRDs — enabling
fully GitOps-managed identity configuration.
Install¶
The base HelmRelease tells Flux which chart to install. This file is shared across all environments — environment-specific settings are applied via patches (shown in the Configuration section).
Create the base directory and file:
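This is the same base directory used by the other components; creating it is a no-op if it already exists:

```shell
mkdir -p flux/infra/base
```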
| Field | Value | Explanation |
|---|---|---|
| `chart` | `crossplane` | The Helm chart name from the Crossplane Stable repository |
| `version` | `2.2.0` | Pinned chart version — update this to upgrade Crossplane |
| `sourceRef.name` | `crossplane-stable` | References a HelmRepository CR pointing to https://charts.crossplane.io/stable |
| `targetNamespace` | `crossplane-system` | Namespace where Crossplane core and providers are deployed |
| `crds: CreateReplace` | — | Automatically installs and updates Crossplane CRDs |
| `remediation.retries` | `3` | Flux retries up to 3 times if the install or upgrade fails |
Save the following as flux/infra/base/crossplane.yaml:
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: crossplane
namespace: flux-system
spec:
targetNamespace: crossplane-system
interval: 30m
chart:
spec:
chart: crossplane
version: "2.2.0"
sourceRef:
kind: HelmRepository
name: crossplane-stable
namespace: flux-system
releaseName: crossplane
install:
createNamespace: true
crds: CreateReplace
remediation:
retries: 3
upgrade:
crds: CreateReplace
remediation:
retries: 3
Alternative: Helm CLI
If you do not have Git access, install Crossplane directly:
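A sketch using the chart coordinates from the table above (the repository alias is arbitrary):

```shell
# Add the Crossplane Stable repository and install the pinned chart version
helm repo add crossplane-stable https://charts.crossplane.io/stable
helm repo update
helm install crossplane crossplane-stable/crossplane \
  --namespace crossplane-system --create-namespace \
  --version 2.2.0
```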
Configuration¶
The environment patch overrides the base HelmRelease with cluster-specific settings. The values file controls how Crossplane behaves. Select your environment below.
Create the environment overlay directory:
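A sketch of that step, using the overlay layout from the Flux directory structure:

```shell
# AWS overlay for Crossplane (patch.yaml and values.yaml live here)
mkdir -p flux/infra/aws/crossplane
```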
Environment Patch¶
The patch file reduces resource allocation for the AWS environment. Save the
following as flux/infra/aws/crossplane/patch.yaml:
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: crossplane
spec:
values:
replicas: 1
resourcesCrossplane:
limits:
cpu: 250m
memory: 512Mi
requests:
cpu: 50m
memory: 128Mi
resourcesRBACManager:
limits:
cpu: 50m
memory: 256Mi
requests:
cpu: 50m
memory: 128Mi
| Setting | Value | Why |
|---|---|---|
| `replicas` | `1` | Single replica for cost efficiency on AWS |
| `resourcesCrossplane.limits` | CPU: 250m, Memory: 512Mi | Conservative limits for the AWS environment |
| `resourcesRBACManager.limits` | CPU: 50m, Memory: 256Mi | Minimal overhead for the RBAC manager |
Helm Values¶
# Crossplane — AWS HA configuration
# 2 replicas, RBAC manager, metrics, webhooks, Keycloak provider
replicas: 2
resourcesCrossplane:
limits:
cpu: 500m
memory: 1024Mi
requests:
cpu: 100m
memory: 256Mi
resourcesRBACManager:
limits:
cpu: 100m
memory: 512Mi
requests:
cpu: 100m
memory: 256Mi
metrics:
enabled: true
webhooks:
enabled: true
rbacManager:
deploy: true
replicas: 1
securityContextCrossplane:
runAsUser: 65532
runAsGroup: 65532
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
securityContextRBACManager:
runAsUser: 65532
runAsGroup: 65532
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
# Bootstrap the Keycloak provider package on install
provider:
packages:
- xpkg.upbound.io/crossplane-contrib/provider-keycloak:v2.14.0
# Crossplane — AWS Non-HA configuration
# Single replica, reduced resources, same Keycloak provider
replicas: 1
resourcesCrossplane:
limits:
cpu: 250m
memory: 512Mi
requests:
cpu: 50m
memory: 128Mi
resourcesRBACManager:
limits:
cpu: 50m
memory: 256Mi
requests:
cpu: 50m
memory: 128Mi
metrics:
enabled: true
webhooks:
enabled: true
rbacManager:
deploy: true
replicas: 1
securityContextCrossplane:
runAsUser: 65532
runAsGroup: 65532
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
securityContextRBACManager:
runAsUser: 65532
runAsGroup: 65532
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
# Bootstrap the Keycloak provider package on install
provider:
packages:
- xpkg.upbound.io/crossplane-contrib/provider-keycloak:v2.14.0
Create the environment overlay directory:
Environment patch
No environment patch needed — base configuration is used. Bare Metal uses the full HA or Non-HA values configuration defined below.
Helm Values¶
# Crossplane — Bare Metal HA configuration
# 2 replicas, RBAC manager, metrics, webhooks, Keycloak provider
replicas: 2
resourcesCrossplane:
limits:
cpu: 500m
memory: 1024Mi
requests:
cpu: 100m
memory: 256Mi
resourcesRBACManager:
limits:
cpu: 100m
memory: 512Mi
requests:
cpu: 100m
memory: 256Mi
metrics:
enabled: true
webhooks:
enabled: true
rbacManager:
deploy: true
replicas: 1
securityContextCrossplane:
runAsUser: 65532
runAsGroup: 65532
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
securityContextRBACManager:
runAsUser: 65532
runAsGroup: 65532
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
# Bootstrap the Keycloak provider package on install
provider:
packages:
- xpkg.upbound.io/crossplane-contrib/provider-keycloak:v2.14.0
# Crossplane — Bare Metal Non-HA configuration
# Single replica, reduced resources, same Keycloak provider
replicas: 1
resourcesCrossplane:
limits:
cpu: 250m
memory: 512Mi
requests:
cpu: 50m
memory: 128Mi
resourcesRBACManager:
limits:
cpu: 50m
memory: 256Mi
requests:
cpu: 50m
memory: 128Mi
metrics:
enabled: true
webhooks:
enabled: true
rbacManager:
deploy: true
replicas: 1
securityContextCrossplane:
runAsUser: 65532
runAsGroup: 65532
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
securityContextRBACManager:
runAsUser: 65532
runAsGroup: 65532
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
# Bootstrap the Keycloak provider package on install
provider:
packages:
- xpkg.upbound.io/crossplane-contrib/provider-keycloak:v2.14.0
Key settings (all environments):
| Setting | HA | Non-HA | Why |
|---|---|---|---|
| `replicas` | `2` | `1` | HA spreads load across nodes; Non-HA uses a single replica |
| `metrics.enabled` | `true` | `true` | Metrics are enabled for all deployments |
| `webhooks.enabled` | `true` | `true` | Webhooks are required for provider validation in both modes |
| `provider.packages` | Keycloak v2.14.0 | Keycloak v2.14.0 | Same provider version in all modes for consistency |
Commit and Deploy¶
Once all files are in place, commit and push to trigger Flux deployment:
Flux will detect the new commit and begin deploying Crossplane. To trigger an immediate sync instead of waiting for the next poll interval:
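A sketch of that sequence (the commit message is illustrative; the source name assumes the bootstrap defaults):

```shell
git add flux/infra/base/crossplane.yaml flux/infra/aws/crossplane/
git commit -m "Add Crossplane HelmRelease and AWS overlay"
git push

# Optional: force Flux to pull the new commit now
flux reconcile source git flux-system -n flux-system
```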
Verify¶
# Check Crossplane core pod
kubectl get pods -n crossplane-system
# Check installed providers
kubectl get providers.pkg.crossplane.io
# Verify provider health
kubectl get providerrevisions.pkg.crossplane.io
# Check that Keycloak CRDs are available
kubectl get crds | grep keycloak.crossplane.io
Flux Operations¶
This component is managed by Flux as HelmRelease crossplane and Kustomization infra-crossplane.
Check whether the HelmRelease and Kustomization are in a Ready state:
Trigger an immediate sync — pulls the latest Git revision and re-applies the manifests. Use after pushing config changes or to verify a fix:
Trigger a Helm upgrade — re-runs the Helm install/upgrade for this release without waiting for the next interval. Use when the HelmRelease values have changed:
View recent Flux controller logs for this release — useful for diagnosing why a sync or upgrade failed:
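The operations above can be sketched as follows, using the HelmRelease and Kustomization names stated at the start of this section:

```shell
# Readiness of the HelmRelease and its Kustomization
flux get helmrelease crossplane -n flux-system
flux get kustomization infra-crossplane -n flux-system

# Immediate sync: pull the latest Git revision and re-apply the manifests
flux reconcile kustomization infra-crossplane -n flux-system --with-source

# Re-run the Helm install/upgrade for this release
flux reconcile helmrelease crossplane -n flux-system

# Recent helm-controller logs for this release
flux logs --kind=HelmRelease --name=crossplane -n flux-system
```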
Keycloak Provider Usage
Once Crossplane and provider-keycloak are healthy, see
Realm, Client & Federation Configuration
for how to manage Keycloak resources declaratively via Crossplane CRDs.
Recovering a stalled HelmRelease
If the HelmRelease shows Stalled with RetriesExceeded, Flux will not retry automatically. Suspend and resume to clear the failure counter, then reconcile:
flux suspend helmrelease crossplane -n flux-system
flux resume helmrelease crossplane -n flux-system
flux reconcile kustomization infra-crossplane -n flux-system
Only run this after confirming the underlying issue (e.g. pod crash, timeout) has been resolved. See Maintenance — Recovering Stalled Resources for details.
Next Steps¶
GitOps and progressive delivery are now configured. Proceed to 5.1.4 Storage to set up persistent storage and data management.