3.4 Configure Storage¶
Each Talos node requires at least one disk for the operating system. Worker nodes typically have an additional data disk for persistent volumes (used by Rook-Ceph, Longhorn, or similar CSI drivers).
Volume Layout¶
| Volume | Purpose | Lifecycle |
|---|---|---|
| OS disk | Talos OS, STATE and EPHEMERAL partitions | Tied to node lifecycle |
| Data disk (optional) | Persistent volumes for workloads | Can persist independently |
Sizing Guidelines¶
| Role | OS Disk | Data Disk | Notes |
|---|---|---|---|
| Control plane | 50-100 GB | Optional (0-100 GB) | etcd data lives on OS disk |
| Worker | 50-100 GB | 100-500 GB | Size depends on workload storage needs |
The storage module (terraform/modules/aws/ebs) creates encrypted EBS gp3 volumes for Talos nodes. These are attached to EC2 instances by the compute module. This is deployed as part of terraform apply via terraform/cluster/aws/main.tf.
Step 1: Configure Storage Variables¶
Open terraform/cluster/envs/aws.tfvars and set the volume sizes:
root_volume_size = 50 # OS root volume per node (gp3)
control_plane_volume_size = 0 # Set to 0 to skip CP data volumes
worker_volume_size = 100 # Data volume per worker (gp3)
The variable defaults in terraform/cluster/aws/variables.tf are:
variable "root_volume_size" {
description = "OS root EBS volume size in GB for all nodes (gp3)"
type = number
default = 20
}
variable "control_plane_volume_size" {
description = "EBS data volume size in GB for control plane nodes (gp3). Set to 0 to skip."
type = number
default = 100
}
variable "worker_volume_size" {
description = "EBS data volume size in GB for worker nodes (gp3)"
type = number
default = 200
}
Note
Setting control_plane_volume_size = 0 skips creating separate data volumes for control plane nodes. The demo environment uses this to save cost.
Volume Layout Per Node¶
Each node has up to two EBS volumes:
| Volume | Device | Created In | Lifecycle |
|---|---|---|---|
| Root | /dev/xvda | compute module (inline with EC2) | Deleted with instance (delete_on_termination = true) |
| Data | /dev/xvdf | ebs module (separate resource) | Persists independently of instance |
Step 2: Understand the Module¶
The module is at terraform/modules/aws/ebs/.
Worker Volumes¶
One data volume per worker node, placed in the same AZ:
resource "aws_ebs_volume" "worker_nodes" {
count = var.worker_node_count
availability_zone = var.availability_zones[count.index % length(var.availability_zones)]
size = var.worker_volume_size
type = "gp3"
iops = 3000
throughput = 125
encrypted = true
tags = merge(local.common_tags, {
Name = "talos-worker-${count.index + 1}"
NodeType = "worker"
NodeIndex = count.index + 1
})
}
Key points:
- availability_zone -- round-robin across configured AZs using modulo (count.index % length)
- type = "gp3" -- latest-generation general purpose SSD
- iops = 3000 / throughput = 125 -- gp3 baseline performance
- encrypted = true -- EBS encryption at rest using the default AWS-managed KMS key
Control Plane Volumes (Conditional)¶
CP volumes are only created when control_plane_volume_size > 0. This logic is handled in the root module:
module "storage" {
source = "../../modules/aws/ebs"
control_plane_count = var.control_plane_volume_size > 0 ? var.control_plane_count : 0
worker_node_count = var.worker_count
control_plane_volume_size = var.control_plane_volume_size > 0 ? var.control_plane_volume_size : 50
worker_volume_size = var.worker_volume_size
availability_zones = var.availability_zones
environment = var.environment
tags = local.tags
}
When control_plane_volume_size = 0, the control_plane_count passed to the module is 0, so no CP volumes are created.
Volume Attachment¶
The actual attachment of volumes to EC2 instances happens in the compute module (terraform/modules/aws/compute/main.tf), not in the ebs module. The storage module exports volume IDs that the compute module consumes:
resource "aws_volume_attachment" "worker" {
count = var.worker_count
device_name = "/dev/xvdf"
volume_id = var.worker_volume_ids[count.index]
instance_id = aws_instance.worker[count.index].id
}
Step 3: Module Outputs¶
The ebs module exports:
| Output | Consumed By |
|---|---|
| control_plane_volume_ids | compute module (volume attachments at /dev/xvdf) |
| worker_volume_ids | compute module (volume attachments at /dev/xvdf) |
| total_storage_capacity_gb | Informational (total provisioned capacity) |
Detailed volume information is available via:
# Volume IDs
terraform output -json worker_volume_ids
# Detailed volume info (size, IOPS, throughput, AZ, encryption status)
terraform output -json worker_volumes
Customisation Summary¶
| What to Change | Where | Variable |
|---|---|---|
| Root volume size | aws.tfvars | root_volume_size |
| CP data volume size | aws.tfvars | control_plane_volume_size (0 = none) |
| Worker data volume size | aws.tfvars | worker_volume_size (minimum 50 GB) |
Warning
IOPS (3000) and throughput (125 MB/s) are hardcoded in the ebs module. To change these, edit terraform/modules/aws/ebs/main.tf directly.
Bare metal storage uses the server's physical disks. Talos partitions the install disk automatically.
Disk Layout¶
When Talos installs to a disk, it creates:
| Partition | Purpose | Size |
|---|---|---|
| EFI | Boot partition | 100 MB |
| BIOS | Legacy boot | 1 MB |
| BOOT | Talos boot assets | 1 GB |
| META | Machine metadata | 1 MB |
| STATE | Machine configuration | 100 MB |
| EPHEMERAL | Kubernetes pods, logs, containerd, etcd data | Remaining space |
Install Disk Selection¶
Specify the install disk in the Talos machine config. Common device paths:
| Device | Type |
|---|---|
| /dev/sda | SATA/SAS SSD or HDD |
| /dev/nvme0n1 | NVMe SSD |
| /dev/vda | VirtIO (unlikely on bare metal) |
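The install disk is set under machine.install.disk in the Talos machine config. A minimal fragment (the device path here is illustrative; use the device present on your server):

```yaml
machine:
  install:
    disk: /dev/nvme0n1   # install target; replace with your server's actual device
```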
Check available disks on a booted server.
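A sketch using talosctl (assumes the node is reachable at the given IP; --insecure is needed while the node is still in maintenance mode):

```shell
# List block devices Talos can see on the node
talosctl get disks --nodes <node-ip> --insecure
```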
Data Disks¶
Additional disks for persistent volumes are not managed by Talos. They are discovered by the CSI driver (Rook-Ceph) running in the cluster.
For Rook-Ceph:
- Leave data disks unpartitioned and unformatted
- Rook-Ceph discovers and provisions OSDs on them automatically
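If a data disk was previously partitioned or formatted, Rook-Ceph will not claim it. A hedged sketch for clearing it first (this irreversibly destroys all data on the disk; verify the device name before running):

```shell
# DANGER: wipes all filesystem and partition signatures on the disk.
# Confirm the device with lsblk before running.
wipefs --all /dev/sdX
```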
RAID Configuration¶
If your servers have hardware RAID controllers:
- Configure RAID before booting Talos (via BIOS/UEFI or IPMI)
- Present a single logical disk to Talos as the install target
- For software RAID: Talos does not support mdadm; use hardware RAID or single disks
Proxmox storage is managed through Proxmox storage pools. The Terraform VM module creates virtual disks on the configured storage pool.
Storage Pool¶
All VM disks are created on the pool specified by storage_pool (default: local-lvm).
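A minimal tfvars sketch (assuming the root module exposes a storage_pool variable, mirroring the VM module's var.storage_pool input):

```
storage_pool = "local-lvm"
```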
Verify the pool exists on your Proxmox node.
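On the Proxmox host, pvesm lists the configured storage pools and their status:

```shell
# Show all storage pools; confirm the target pool is listed and active
pvesm status
```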
Disk Configuration¶
The VM module creates disks with these settings:
# From terraform/modules/proxmox/vm/main.tf
disk {
datastore_id = var.storage_pool
interface = "scsi0" # OS disk
size = var.disk_size_gb
iothread = true # Improved performance
discard = "on" # TRIM/discard support
}
Worker nodes get an optional data disk on scsi1:
dynamic "disk" {
for_each = var.data_disk_size_gb > 0 ? [1] : []
content {
datastore_id = coalesce(var.data_disk_storage, var.storage_pool)
interface = "scsi1" # Data disk
size = var.data_disk_size_gb
iothread = true
discard = "on"
file_format = "raw"
}
}
Proxmox Environment Sizes¶
| Node Type | OS Disk (scsi0) | Data Disk (scsi1) |
|---|---|---|
| Control plane | 50 GB | None |
| Worker | 50 GB | 50 GB |
These are set in envs/proxmox.tfvars.
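A sketch of the corresponding entries (variable names assumed to mirror the VM module's disk_size_gb and data_disk_size_gb inputs):

```
disk_size_gb      = 50   # OS disk (scsi0) for all nodes
data_disk_size_gb = 50   # Worker data disk (scsi1); set to 0 to skip
```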
Storage Backend Recommendations¶
| Backend | Type | Best For |
|---|---|---|
| local-lvm | LVM thin | Default, good performance |
| local-zfs | ZFS | Snapshots, compression, dedup |
| NFS/CIFS | Network | Shared storage (slower) |
For production, ZFS on local NVMe provides the best combination of performance and data protection.
Data Disk Usage¶
The data disk (scsi1) appears as /dev/sdb inside the VM. It is not managed by Talos; leave it for the CSI driver:
- Rook-Ceph: discovers /dev/sdb and provisions OSDs automatically
- Longhorn: uses the disk for replica storage