4.3 Bootstrap the Cluster¶
With the machine configurations generated in 4.1 and Talos installed on all nodes in 4.2, the next step is to bootstrap the Kubernetes cluster.
Configure talosctl¶
Before bootstrapping, merge the generated talosconfig so talosctl can authenticate with the cluster.
If you already completed this in 4.2 Boot & Install — Configure talosctl, no additional steps are needed. Otherwise, run the commands matching how your machine configurations were generated.

If the configs were generated with `talosctl gen config` (output in `_out/`):

```bash
# Merge the generated talosconfig into your local config
talosctl config merge _out/talosconfig

# Set the endpoint to the control plane VIP or first CP IP
talosctl config endpoint 192.168.30.30

# Set the node targets
talosctl config nodes 192.168.30.31 192.168.30.32 192.168.30.33 192.168.30.34 192.168.30.35 192.168.30.36
```

If the configs were generated with talhelper (output in `clusterconfig/`):

```bash
# Merge the talhelper-generated talosconfig into your local config
talosctl config merge clusterconfig/talosconfig

# Set the endpoint to the control plane IP (or VIP if configured)
talosctl config endpoint 192.168.30.31

# Set the node targets
talosctl config nodes 192.168.30.31 192.168.30.34 192.168.30.35 192.168.30.36
```
Info
Endpoints are the Talos API entry points — typically the control plane VIP or IP. Nodes are the targets for talosctl commands. When you run a command, talosctl connects to an endpoint, which proxies the request to the specified node.
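To make the endpoint/node distinction concrete, here is a sketch using the example IPs above: talosctl connects to the configured endpoint, which proxies the request to the node named with `--nodes`.

```bash
# Connects to the configured endpoint (192.168.30.30 in the examples above),
# which proxies the request to node 192.168.30.32
talosctl --nodes 192.168.30.32 version
```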
Bootstrap the Cluster¶
Bootstrapping initialises the etcd cluster and starts the Kubernetes control plane. This is a one-time operation that should only be run against a single control plane node.
Warning
Only run the bootstrap command once on a single control plane node. Running it on multiple nodes or repeating it will cause etcd conflicts and leave the cluster in a broken state.
After merging the talosconfig and setting nodes in 4.2, bootstrap using the control plane node's private IP:
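As a sketch, assuming the first control plane node from the examples above (substitute your own node's IP):

```bash
# Run exactly once, against a single control plane node
talosctl bootstrap --nodes 192.168.30.31
```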
On AWS, the endpoint (the NLB DNS name) is already configured in your talosconfig; the `--nodes` flag targets the specific control plane instance by its private IP.
Retrieve the Kubeconfig¶
Once the cluster is bootstrapped, retrieve the kubeconfig so you can interact with Kubernetes via kubectl:
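By default, `talosctl kubeconfig` merges the cluster's credentials into `~/.kube/config`:

```bash
# Retrieve the kubeconfig and merge it into ~/.kube/config
talosctl kubeconfig --force
```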
The `--force` flag overwrites any existing kubeconfig entry for this cluster. The kubeconfig will use the NLB DNS name as the server address (set via `certSANs` and the endpoint in `talconfig.yaml`).
Verify cluster access:
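A standard `kubectl` check confirms the API server is reachable and lists the nodes:

```bash
# List all nodes with their status, roles, and IPs
kubectl get nodes -o wide
```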
Nodes will show NotReady until a CNI plugin is installed in the next step.
Install Cilium CNI¶
The cluster requires a Container Network Interface (CNI) plugin before pods can communicate.
On AWS, use the AWS bootstrap helmfile, which installs the full infrastructure stack in dependency order:
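The exact invocation depends on where the helmfile lives in your repository; the path below is illustrative, not the actual location:

```bash
# Apply the AWS bootstrap helmfile (adjust the path to your repository)
helmfile -f bootstrap/aws/helmfile.yaml sync
```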
This will:
- Install a minimal Cilium CNI (presync hook) for basic pod networking
- Install the AWS Cloud Controller Manager to initialise nodes with `providerID` and remove the `uninitialized` taint
- Install cert-manager for TLS certificate management
- Install the AWS Load Balancer Controller for NLB/ALB provisioning
- Upgrade Cilium to the full configuration (Hubble, Gateway API, Service Mesh, Ingress Controller)
Note
The AWS bootstrap differs from bare metal because the AWS Cloud Controller Manager must run before worker nodes become schedulable. The helmfile handles this ordering automatically.
On bare metal, use the bootstrap helmfile to install Cilium and its dependencies:
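As with the AWS variant, the helmfile path depends on your repository layout; the path below is an assumption:

```bash
# Apply the bootstrap helmfile (adjust the path to your repository)
helmfile -f bootstrap/helmfile.yaml sync
```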
This will:
- Install Cilium into `kube-system` with environment-specific values
- Wait for all Cilium pods and the operator to become ready
- Apply the L2 IPPool and announcement policy for LoadBalancer services
- Install cert-manager for TLS certificate management
The helmfile references an environment-specific values file for Cilium. Below is a template you can use as a starting point:
```yaml
cluster:
  name: <cluster-name>

# Network interface used for L2 announcements and pod traffic
devices: eth0

# L2 announcements for LoadBalancer services
l2announcements:
  enabled: true
  interface: eth0
  leaseDuration: 15s
  leaseRenewDeadline: 5s
  leaseRetryPeriod: 1s

# Service mesh and ingress
serviceMesh:
  enabled: true
envoyConfig:
  enabled: true
ingressController:
  enabled: true
  loadbalancerMode: dedicated
loadBalancer:
  l7:
    backend: envoy

# WireGuard encryption (disabled by default)
encryption:
  enabled: false
  type: wireguard
  nodeEncryption: false
extraConfig:
  node-encryption-opt-out-labels: ""

# IPAM mode
ipam:
  mode: kubernetes

# Observability
hubble:
  ui:
    enabled: true
  relay:
    enabled: true
  peerService:
    port: 443
    targetPort: 4244
    internalTrafficPolicy: Cluster

# Replace kube-proxy with Cilium
kubeProxyReplacement: true

# Required capabilities for Talos
securityContext:
  capabilities:
    ciliumAgent:
      - CHOWN
      - KILL
      - NET_ADMIN
      - NET_RAW
      - IPC_LOCK
      - SYS_ADMIN
      - SYS_RESOURCE
      - DAC_OVERRIDE
      - FOWNER
      - SETGID
      - SETUID
    cleanCiliumState:
      - NET_ADMIN
      - SYS_ADMIN
      - SYS_RESOURCE

# Talos mounts cgroups itself
cgroup:
  autoMount:
    enabled: false
  hostRoot: /sys/fs/cgroup

# Talos KubePrism — per-node API server proxy
k8sServiceHost: localhost
k8sServicePort: 7445

# Gateway API support
gatewayAPI:
  enabled: true
  enableAlpn: true
  enableAppProtocol: true
```
Note
The bootstrap helmfile enforces dependency ordering — Cilium is installed first since all other components depend on a functional CNI.
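Once the helmfile completes, you can confirm the CNI is functional with standard `kubectl` checks (these are not part of the helmfile itself):

```bash
# Cilium agent pods should be Running on every node
kubectl -n kube-system get pods -l k8s-app=cilium

# Nodes should now report Ready
kubectl get nodes
```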
Next Steps¶
Proceed to 4.4 Verify the Cluster to run a full set of health and connectivity checks before deploying workloads.