
4.3 Bootstrap the Cluster

With the machine configurations generated in 4.1 and Talos installed on all nodes in 4.2, the next step is to bootstrap the Kubernetes cluster.

Configure talosctl

Before bootstrapping, merge the generated talosconfig so talosctl can authenticate with the cluster.

This was completed in 4.2 Boot & Install — Configure talosctl. If you already merged the config there, no additional steps are needed; otherwise, run the commands for the workflow you used to generate configs.

If you generated configs with talosctl gen config:

# Merge the generated talosconfig into your local config
talosctl config merge _out/talosconfig

# Set the endpoint to the control plane VIP or first control plane IP
talosctl config endpoint 192.168.30.30

# Set the node targets
talosctl config nodes 192.168.30.31 192.168.30.32 192.168.30.33 192.168.30.34 192.168.30.35 192.168.30.36

If you generated configs with talhelper:

# Merge the talhelper-generated talosconfig into your local config
talosctl config merge clusterconfig/talosconfig

# Set the endpoint to the control plane IP (or VIP if configured)
talosctl config endpoint 192.168.30.31

# Set the node targets
talosctl config nodes 192.168.30.31 192.168.30.34 192.168.30.35 192.168.30.36

Info

Endpoints are the Talos API entry points — typically the control plane VIP or IP. Nodes are the targets for talosctl commands. When you run a command, talosctl connects to an endpoint, which proxies the request to the specified node.
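For example, both can be overridden per command with the --endpoints and --nodes flags (the IPs here match the examples above):

```shell
# Query the Talos version of one node, proxied through the endpoint
talosctl --endpoints 192.168.30.30 --nodes 192.168.30.32 version
```

The persistent config set with talosctl config endpoint / talosctl config nodes simply provides defaults for these flags.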


Bootstrap the Cluster

Bootstrapping initialises the etcd cluster and starts the Kubernetes control plane. This is a one-time operation that should only be run against a single control plane node.

Warning

Only run the bootstrap command once on a single control plane node. Running it on multiple nodes or repeating it will cause etcd conflicts and leave the cluster in a broken state.
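If you are unsure whether the cluster has already been bootstrapped, check the etcd service on a control plane node before running bootstrap (node IP as in the examples below):

```shell
# If etcd is already running, bootstrap has already been performed
talosctl --nodes 192.168.30.31 service etcd
```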

After merging the talosconfig and setting nodes in 4.2, bootstrap using a single control plane node's IP.

On AWS, target the control plane instance by its private IP (the NLB endpoint is already configured in your talosconfig):

talosctl bootstrap --nodes 10.2.11.65

On Proxmox, target the first control plane node:

talosctl bootstrap --nodes 192.168.30.31

Retrieve the Kubeconfig

Once the cluster is bootstrapped, retrieve the kubeconfig so you can interact with Kubernetes via kubectl.

On AWS:

talosctl kubeconfig --nodes 10.2.11.65 --force

The --force flag overwrites any existing kubeconfig entry for this cluster. The kubeconfig will use the NLB DNS name as the server address (set via certSANs and endpoint in talconfig.yaml).

With talhelper:

talhelper gencommand kubeconfig

Run the generated command to fetch and merge the kubeconfig into your local configuration. By default, the kubeconfig is written to ./kubeconfig.

Tip

Export KUBECONFIG to use the generated file directly:

export KUBECONFIG=$(pwd)/kubeconfig

Verify cluster access:

kubectl get nodes -o wide

Nodes will show NotReady until a CNI plugin is installed in the next step.


Install Cilium CNI

The cluster requires a Container Network Interface (CNI) plugin before pods can communicate.

On AWS, use the bootstrap helmfile, which installs the full infrastructure stack in dependency order:

cd apps/talos/aws
helmfile -f helmfile-bootstrap.yaml -e aws sync

This will:

  1. Install a minimal Cilium CNI (presync hook) for basic pod networking
  2. Install the AWS Cloud Controller Manager to initialise nodes with providerID and remove the uninitialized taint
  3. Install cert-manager for TLS certificate management
  4. Install the AWS Load Balancer Controller for NLB/ALB provisioning
  5. Upgrade Cilium to the full configuration (Hubble, Gateway API, Service Mesh, Ingress Controller)

Note

The AWS bootstrap differs from bare metal because the AWS Cloud Controller Manager must run before worker nodes become schedulable. The helmfile handles this ordering automatically.
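To confirm the Cloud Controller Manager has initialised the workers, you can list any remaining node taints (the relevant taint key is node.cloudprovider.kubernetes.io/uninitialized):

```shell
# Nodes still carrying the uninitialized taint have not been processed by the CCM
kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'
```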

On Proxmox, use the bootstrap helmfile to install Cilium and its dependencies:

cd apps/talos/
helmfile -f helmfile-bootstrap.yaml.gotmpl -e proxmox sync

This will:

  1. Install Cilium into kube-system with environment-specific values
  2. Wait for all Cilium pods and the operator to become ready
  3. Apply the L2 IPPool and announcement policy for LoadBalancer services
  4. Install cert-manager for TLS certificate management
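Step 3 applies Cilium's L2 resources. A minimal sketch of what they could look like (pool name, CIDR, and interface are placeholders; adjust them for your network):

```yaml
apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: l2-pool
spec:
  blocks:
    - cidr: 192.168.30.200/29
---
apiVersion: cilium.io/v2alpha1
kind: CiliumL2AnnouncementPolicy
metadata:
  name: l2-policy
spec:
  interfaces:
    - eth0
  loadBalancerIPs: true
```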

The helmfile references an environment-specific values file for Cilium. Below is a template you can use as a starting point:

apps/infra/cilium/env/values.yaml
cluster:
  name: <cluster-name>

# Network interface used for L2 announcements and pod traffic
devices: eth0

# L2 announcements for LoadBalancer services
l2announcements:
  enabled: true
  interface: eth0
  leaseDuration: 15s
  leaseRenewDeadline: 5s
  leaseRetryPeriod: 1s

# Service mesh and ingress
serviceMesh:
  enabled: true
envoyConfig:
  enabled: true
ingressController:
  enabled: true
  loadbalancerMode: dedicated
loadBalancer:
  l7:
    backend: envoy

# WireGuard encryption (disabled by default)
encryption:
  enabled: false
  type: wireguard
  nodeEncryption: false
extraConfig:
  node-encryption-opt-out-labels: ""

# IPAM mode
ipam:
  mode: kubernetes

# Observability
hubble:
  ui:
    enabled: true
  relay:
    enabled: true
    peerService:
      port: 443
      targetPort: 4244
      internalTrafficPolicy: Cluster

# Replace kube-proxy with Cilium
kubeProxyReplacement: true

# Required capabilities for Talos
securityContext:
  capabilities:
    ciliumAgent:
      - CHOWN
      - KILL
      - NET_ADMIN
      - NET_RAW
      - IPC_LOCK
      - SYS_ADMIN
      - SYS_RESOURCE
      - DAC_OVERRIDE
      - FOWNER
      - SETGID
      - SETUID
    cleanCiliumState:
      - NET_ADMIN
      - SYS_ADMIN
      - SYS_RESOURCE

# Talos mounts cgroups itself
cgroup:
  autoMount:
    enabled: false
  hostRoot: /sys/fs/cgroup

# Talos KubePrism — per-node API server proxy
k8sServiceHost: localhost
k8sServicePort: 7445

# Gateway API support
gatewayAPI:
  enabled: true
  enableAlpn: true
  enableAppProtocol: true

Note

The bootstrap helmfile enforces dependency ordering — Cilium is installed first since all other components depend on a functional CNI.
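Before proceeding, it is worth confirming that the Cilium agents are healthy (the chart installs into kube-system, per the steps above):

```shell
# All Cilium agent pods should report Running and Ready
kubectl -n kube-system get pods -l k8s-app=cilium
```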


Next Steps

Proceed to 4.4 Verify the Cluster to run a full set of health and connectivity checks before deploying workloads.