Complete Proxmox VE Setup Guide 2026: From Bare Metal to Production Homelab


If you have been looking for a Proxmox setup guide that takes you from a blank machine to a fully running homelab, you are in the right place. Proxmox Virtual Environment (VE) is the open-source hypervisor that has become the de facto standard for self-hosters and homelab enthusiasts, and for good reason: it combines KVM-based virtual machines, LXC containers, ZFS storage, software-defined networking, and a polished web UI into a single, free platform.

This pillar guide walks you through every stage of a Proxmox homelab deployment. We will cover hardware selection, the Proxmox VE installation process, post-install configuration, storage layout decisions, your first VM and container, backups, GPU passthrough, and security hardening. By the end, you will have a production-grade virtualization host running in your home.

What Is Proxmox VE and Why Choose It?

Proxmox VE is a Debian-based server virtualization platform developed by Proxmox Server Solutions GmbH. The current stable release is Proxmox VE 8.4, built on Debian 12.10 “Bookworm” with Linux kernel 6.8 (and kernel 6.14 available as opt-in), QEMU 9.2, LXC 6.0, and ZFS 2.2.7.

What sets Proxmox apart from alternatives like VMware ESXi or Microsoft Hyper-V is its combination of power and accessibility:

  • Completely free to use. There is no feature-gated free tier. Every capability is available without a subscription. Paid support subscriptions exist for those who want them, but the software itself is fully functional out of the box.
  • Dual virtualization stack. Run full KVM virtual machines for workloads that need a complete OS, and lightweight LXC containers for services that only need an isolated Linux userspace.
  • Integrated web UI. Manage everything from a browser-based dashboard on port 8006 --- no separate vCenter or management server required.
  • Built-in ZFS support. Create mirrors, RAIDZ pools, and snapshots without any additional software.
  • Clustering and high availability. Link multiple nodes into a cluster with live migration, fencing, and HA group policies.
  • Mature backup ecosystem. Proxmox Backup Server (PBS) provides incremental, deduplicated, and optionally encrypted backups that integrate directly with the Proxmox VE UI.

For a homelab, this means you get enterprise-grade virtualization on commodity hardware with zero licensing cost.

Choosing the Right Hardware for Your Proxmox Homelab

Before we touch software, you need hardware. The homelab community has converged on a few popular form factors, each with trade-offs worth understanding.

Hardware Comparison Table

| Hardware | CPU (Typical Config) | Max RAM | Storage Slots | Networking | Idle Power | Price Range (Used/New) |
| --- | --- | --- | --- | --- | --- | --- |
| Minisforum MS-01 | Intel i9-13900H (14C/20T) | 96 GB DDR5 | 2x M.2 NVMe + 1x 2.5" SATA | 2x 10GbE SFP+ + 1x 2.5GbE | ~15 W | $550–$900 |
| Minisforum MS-02 | Intel Core Ultra 9 (16C/22T) | 96 GB DDR5 | 2x M.2 NVMe + PCIe x16 slot | 2x 10GbE SFP+ + 1x 2.5GbE | ~18 W | $700–$1,200 |
| Dell OptiPlex 7060/7080 Micro | Intel i5-8500T / i7-10700T (6C/8C) | 64 GB DDR4 | 1x M.2 NVMe + 1x 2.5" SATA | 1x 1GbE | ~8 W | $80–$200 (refurb) |
| HP EliteDesk 800 G5/G9 Mini | Intel i5-9500T / i5-12500T (6C) | 64 GB DDR4/DDR5 | 1–2x M.2 NVMe + 1x 2.5" SATA | 1x 1GbE (G9: FlexIO 2.5/10GbE) | ~10 W | $100–$350 (refurb) |
| Intel NUC 13 Pro | Intel i7-1360P (12C/16T) | 64 GB DDR4 | 1x M.2 NVMe + 1x 2.5" SATA | 1x 2.5GbE + Thunderbolt 4 | ~10 W | $400–$650 |

Recommendations by Budget

Budget build ($100—$250). A refurbished Dell OptiPlex Micro or HP EliteDesk Mini is hard to beat. These Tiny/Mini/Micro (TMM) machines are plentiful on the secondary market because enterprises refresh them constantly. Add 32—64 GB of DDR4 SO-DIMM and a 1 TB NVMe drive and you have a surprisingly capable single-node Proxmox host for under $250 all in. The HP EliteDesk 800 G5 Mini is a particular favorite because it has two M.2 slots, allowing a ZFS mirror on NVMe for your boot/VM storage.

Mid-range build ($500—$900). The Minisforum MS-01 is arguably the most popular dedicated homelab mini PC on the market right now. Its dual SFP+ 10 GbE ports make it ideal for Ceph clusters, high-throughput NFS, or iSCSI storage. Two NVMe slots plus a 2.5-inch bay give you flexibility for a ZFS mirror boot pool and a separate data drive. The Intel NUC 13 Pro is also excellent here if you value Thunderbolt 4 expansion.

High-end build ($900+). The Minisforum MS-02 adds a full-length PCIe x16 slot, which opens the door to discrete GPU passthrough for local AI inference or game streaming. If your homelab goals include running Ollama, llama.cpp, or a Plex transcoding GPU, this is the form factor to target.

For a general-purpose Proxmox homelab node, aim for at least:

  • CPU: 4 cores / 8 threads (Intel 8th gen+ or AMD Ryzen 3000+)
  • RAM: 32 GB (16 GB is workable but tight once you run a handful of VMs and containers)
  • Storage: 500 GB NVMe SSD (ideally two drives for a ZFS mirror)
  • Networking: Gigabit Ethernet minimum; 2.5 GbE or 10 GbE preferred

ECC memory is nice to have for ZFS but is absolutely not required. Proxmox and ZFS run perfectly well on non-ECC consumer RAM.

Creating the Proxmox VE USB Installer

Download the latest Proxmox VE ISO from the official download page at proxmox.com/en/downloads. At the time of writing, the current version is Proxmox VE 8.4.

On Linux or macOS

Identify your USB drive (be very careful to pick the right device):

# Linux
lsblk

# macOS
diskutil list

Write the ISO to the drive:

sudo dd bs=4M if=proxmox-ve_8.4-1.iso of=/dev/sdX status=progress
sync

Replace /dev/sdX with your actual USB device path. On macOS, use /dev/rdiskN (the raw device) for significantly faster writes.

On Windows

Use Rufus or balenaEtcher. In Rufus, select “DD Image” mode when prompted --- this ensures the Proxmox bootloader is written correctly.

Proxmox VE Installation Walkthrough

Boot from the USB drive. You may need to press F2, F10, F12, or Del during POST to access the boot menu, depending on your hardware.

Step 1: Accept the EULA

The installer opens with the Proxmox VE license agreement. Click “I agree” to proceed.

Step 2: Select the Target Disk

This is where you choose your installation disk and, critically, your filesystem. The installer presents a dropdown with the following options:

  • ext4 --- Traditional Linux filesystem. Simple and well-understood.
  • xfs --- High-performance journaling filesystem. Good for large sequential writes.
  • ZFS (RAID0, RAID1, RAIDZ1, RAIDZ2, RAIDZ3) --- Advanced copy-on-write filesystem with built-in redundancy. This is the recommended choice for most homelabs.

If you have two NVMe drives, select ZFS (RAID1) to create a mirrored boot pool. This gives you redundancy for the OS and any VMs/containers stored on the root pool. We will discuss storage strategy in detail later.

Click “Options” to adjust:

  • ashift: Leave at 12 for most SSDs. Use 13 for 8K sector drives.
  • compress: Set to lz4 for a good balance of CPU usage and space savings.
  • checksum: Leave at on.
  • hdsize: You can limit how much of the disk Proxmox uses, which is useful if you want to reserve space for other ZFS datasets later.
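
If you want to double-check these choices later, both values are easy to read back from the shell (rpool is the installer's default pool name; adjust if yours differs):

# Confirm the pool's sector alignment and compression settings after install
zpool get ashift rpool
zfs get compression,checksum rpool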

Step 3: Set Location and Timezone

Select your country, timezone, and keyboard layout.

Step 4: Set the Root Password and Email

Choose a strong root password. The email address is used for system notifications (e.g., ZFS scrub results, backup failures). Use a real address.

Step 5: Configure Networking

The installer will auto-detect your network interface. Configure:

  • Management Interface: Select your primary NIC.
  • Hostname (FQDN): Use a fully qualified name like pve.homelab.local.
  • IP Address: Assign a static IP on your LAN (e.g., 192.168.1.100/24).
  • Gateway: Your router’s IP (e.g., 192.168.1.1).
  • DNS Server: Your router, Pi-hole, or a public resolver like 1.1.1.1.

Step 6: Install

Review the summary and click “Install.” The process takes 3—10 minutes depending on your storage speed. When it finishes, remove the USB drive and reboot.

Step 7: Access the Web UI

Open a browser and navigate to:

https://192.168.1.100:8006

Log in with username root and the password you set during installation. You will see a certificate warning --- this is normal and expected because Proxmox uses a self-signed certificate by default.

Post-Install Configuration

A fresh Proxmox install needs a few adjustments before it is ready for production use.

Switch to the No-Subscription Repository

By default, Proxmox is configured to use the enterprise repository, which requires a paid subscription. For homelab use, switch to the free no-subscription repository.

In the web UI, navigate to your node, then Updates > Repositories:

  1. Disable the pve-enterprise repository.
  2. Add the No-Subscription repository.

Or do it from the command line:

# Disable the enterprise repo
sed -i 's/^deb/# deb/' /etc/apt/sources.list.d/pve-enterprise.list

# Add the no-subscription repo
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list

# Update and upgrade
apt update && apt full-upgrade -y

Remove the Subscription Nag Dialog

When you log into the web UI without an active subscription, Proxmox displays a pop-up reminder. You can remove this with the following one-liner:

sed -Ezi.bak "s/(function\(orig_cmd\) \{)/\1\n\torig_cmd();\n\treturn;/" /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js
systemctl restart pveproxy.service

Note that this change will be overwritten when the proxmox-widget-toolkit package is updated, so you may need to reapply it after major upgrades.

Alternatively, the community pve-nag-buster script uses a dpkg hook to automatically reapply the patch after updates.

Install Useful Utilities

apt install -y vim htop iotop net-tools curl wget gnupg lsb-release

Configure NTP

Accurate time is critical for clustering, Ceph, and certificate validation:

timedatectl set-ntp true
timedatectl status

Proxmox uses chrony by default. Verify it is running:

chronyc tracking

Networking Setup

Proxmox creates a Linux bridge (vmbr0) during installation that connects your VMs and containers to the physical network. For most single-node homelabs, this default bridge is sufficient.

Understanding the Default Bridge

Check your current network configuration:

cat /etc/network/interfaces

You will see something like:

auto lo
iface lo inet loopback

iface enp1s0 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.100/24
    gateway 192.168.1.1
    bridge-ports enp1s0
    bridge-stp off
    bridge-fd 0

The physical NIC (enp1s0) is enslaved to the bridge (vmbr0). All VMs and containers that use vmbr0 will appear as devices on your LAN, receiving IPs from your router’s DHCP server.

Adding a Second Network for Internal-Only Traffic

If you want an isolated network for inter-VM communication (useful for database backends, Docker overlay networks, or lab experiments):

# Add to /etc/network/interfaces
auto vmbr1
iface vmbr1 inet static
    address 10.10.10.1/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up iptables -t nat -A POSTROUTING -s 10.10.10.0/24 -o vmbr0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s 10.10.10.0/24 -o vmbr0 -j MASQUERADE

This creates a NATed internal network. VMs on vmbr1 can reach the internet through the host but are not directly accessible from the LAN (unless you add port forwarding rules).
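
If you do need to expose one service to the LAN, a DNAT rule can sit alongside the MASQUERADE lines. A sketch, assuming a web server at 10.10.10.50 that should answer on host port 8080 (addresses and ports are illustrative):

# Additional post-up/post-down lines for vmbr1 in /etc/network/interfaces
post-up iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 8080 -j DNAT --to-destination 10.10.10.50:80
post-down iptables -t nat -D PREROUTING -i vmbr0 -p tcp --dport 8080 -j DNAT --to-destination 10.10.10.50:80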

Apply the changes:

ifreload -a

VLAN Tagging

If your switch supports VLANs, Proxmox bridges can tag traffic natively. Enable “VLAN Aware” on vmbr0 in the web UI (or add bridge-vlan-aware yes to the config), then assign VLAN tags per VM NIC.
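
As a sketch (interface names, addresses, and the VLAN ID are examples), the VLAN-aware bridge stanza and a tagged VM NIC might look like this:

# /etc/network/interfaces: make vmbr0 VLAN aware
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.100/24
    gateway 192.168.1.1
    bridge-ports enp1s0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

# Tag VM 100's first NIC with VLAN 20 (note: redefining net0 without macaddr= generates a new MAC)
qm set 100 -net0 virtio,bridge=vmbr0,tag=20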

Storage Configuration: ZFS vs LVM

Storage is one of the most important decisions you will make. Proxmox supports multiple storage backends, but the two most common for local disks are ZFS and LVM-Thin.

ZFS: The Recommended Default

ZFS is a combined filesystem and volume manager that provides:

  • End-to-end checksumming detects and corrects silent data corruption (bit rot).
  • Copy-on-write snapshots are instant and space-efficient.
  • Built-in compression (LZ4 or ZSTD) saves space with minimal CPU overhead.
  • Native replication allows scheduled incremental snapshot shipping to a second node.
  • Pool portability --- you can physically move drives to another host and import the pool.

Create a ZFS pool from additional drives after installation:

# Mirror pool from two drives
zpool create -f -o ashift=12 tank mirror /dev/sda /dev/sdb

# Enable compression
zfs set compression=lz4 tank

# Create a dataset for VM disks
zfs create tank/vm-disks

# Add as Proxmox storage
pvesm add zfspool tank-storage -pool tank/vm-disks

Important: Do not fill a ZFS pool beyond 80% capacity. Performance degrades significantly as ZFS needs free space for copy-on-write operations, especially with fragmented pools.
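
You can keep an eye on capacity and fragmentation at a glance:

# Check pool usage, fragmentation, and health
zpool list -o name,size,alloc,free,cap,frag,health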

LVM-Thin: The Lightweight Alternative

LVM-Thin provisioning offers:

  • Lower memory overhead than ZFS (no ARC cache).
  • Thin provisioning --- disks are allocated on demand, not upfront.
  • Simpler setup on single-disk systems.

However, LVM-Thin lacks checksumming, built-in compression, and native replication. If you have a single drive and limited RAM (16 GB or less), LVM-Thin is a pragmatic choice. For everything else, ZFS is the stronger option.
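
Setting up LVM-Thin on a spare disk takes only a few commands. A sketch, assuming an extra drive at /dev/sdc and example names (vgdata, data, local-thin):

# Turn a spare disk into an LVM thin pool and register it with Proxmox
pvcreate /dev/sdc
vgcreate vgdata /dev/sdc
lvcreate -L 200G -T vgdata/data
pvesm add lvmthin local-thin --vgname vgdata --thinpool data --content images,rootdir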

A sensible starting layout for a two-drive mini PC looks like this:

| Pool | Drives | Filesystem | Purpose |
| --- | --- | --- | --- |
| rpool | NVMe 1 + NVMe 2 (mirror) | ZFS RAID1 | Proxmox OS, VM disks, container rootfs |
| External HDD/NAS | USB or NFS mount | ext4/XFS | Backups (vzdump), ISO storage |

If you have three or more drives, consider separating the OS pool from the data pool:

| Pool | Drives | Filesystem | Purpose |
| --- | --- | --- | --- |
| rpool | NVMe 1 + NVMe 2 (mirror) | ZFS RAID1 | Proxmox OS |
| tank | SATA SSD 1 + SATA SSD 2 (mirror) | ZFS RAID1 | VM disks, container rootfs |
| NAS share | NFS/SMB | N/A | Backups, ISOs, templates |

Creating Your First Virtual Machine

Let us create an Ubuntu Server VM as a practical example.

Step 1: Upload an ISO

Navigate to your storage (e.g., local) > ISO Images > Upload. Upload the Ubuntu Server 24.04 LTS ISO.

You can also download directly from the Proxmox host:

cd /var/lib/vz/template/iso/
wget https://releases.ubuntu.com/24.04/ubuntu-24.04.1-live-server-amd64.iso

Step 2: Create the VM

Click Create VM in the top-right corner of the web UI:

  • General: VM ID 100, Name: ubuntu-server
  • OS: Select the Ubuntu ISO, Type: Linux, Version: 6.x - 2.6 Kernel
  • System: Machine: q35, BIOS: OVMF (UEFI), EFI Storage: local-zfs, Add TPM: optional
  • Disks: Bus: VirtIO Block, Storage: local-zfs, Size: 32 GB, Discard: checked (for SSD TRIM)
  • CPU: Type: host (best performance), Cores: 2
  • Memory: 2048 MB (2 GB), Ballooning: enabled
  • Network: Bridge: vmbr0, Model: VirtIO (paravirtualized)

Step 3: Install the OS

Start the VM, open the Console, and follow the Ubuntu installer. Once installed, install the QEMU Guest Agent for better integration:

sudo apt update && sudo apt install -y qemu-guest-agent
sudo systemctl enable --now qemu-guest-agent

Then enable the Guest Agent in Proxmox: VM > Options > QEMU Guest Agent > Enabled.
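
Once the agent is enabled and running, you can confirm the host can reach it (VM 100 from this example):

# Succeeds silently when the guest agent is reachable
qm guest cmd 100 ping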

Step 4: Convert to a Template (Optional)

If you plan to deploy multiple VMs with the same base config, convert this VM to a template:

  1. Shut down the VM.
  2. Right-click > Convert to Template.
  3. To deploy a new VM, right-click the template > Clone > Full Clone.

This gives you a reusable golden image for rapid provisioning.
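
The same workflow can be scripted from the shell. A sketch (the IDs and clone name are examples):

# Convert VM 100 into a template, then deploy a full clone as VM 101
qm shutdown 100
qm template 100
qm clone 100 101 --name web01 --full 1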

Creating LXC Containers

LXC containers are one of Proxmox’s greatest strengths. They share the host kernel, use a fraction of the RAM a VM requires, and start in seconds.

Download a Container Template

Navigate to your storage > CT Templates > Templates. Proxmox maintains an official repository of templates. Download debian-12-standard or ubuntu-24.04-standard.

Or from the command line:

pveam update
pveam available --section system
pveam download local debian-12-standard_12.7-1_amd64.tar.zst

Create the Container

Click Create CT:

  • General: CT ID 200, Hostname: docker-host, Password: (set a root password), Unprivileged: checked
  • Template: Select the downloaded template.
  • Disks: Root Disk: 8 GB on local-zfs
  • CPU: Cores: 2
  • Memory: 1024 MB, Swap: 512 MB
  • Network: Bridge: vmbr0, IPv4: DHCP (or static)
  • DNS: Use host settings

Start the container and open the console. You now have a running Debian or Ubuntu system in under 10 seconds.
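
For repeatable deployments, the same container can be created from the command line. A sketch using the values above (adjust IDs, storage names, the password, and the template filename to match your system):

# Create and start an unprivileged Debian container
pct create 200 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
    --hostname docker-host \
    --cores 2 --memory 1024 --swap 512 \
    --rootfs local-zfs:8 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp \
    --unprivileged 1 \
    --password 'choose-a-strong-password'
pct start 200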

Nested Docker Inside LXC

Running Docker inside an unprivileged LXC container requires a few tweaks. Edit the container’s configuration:

# On the Proxmox host
nano /etc/pve/lxc/200.conf

Add these lines:

features: keyctl=1,nesting=1
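
Equivalently, you can apply the same flags with pct instead of hand-editing the file:

# Same change via the CLI (container 200 from the example above)
pct set 200 --features keyctl=1,nesting=1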

Restart the container, then install Docker inside it:

apt update && apt install -y curl
curl -fsSL https://get.docker.com | sh
docker run hello-world

This is one of the most efficient ways to run Docker workloads in a homelab --- you get near-native performance with the isolation and snapshot capabilities of LXC, at a fraction of the overhead of a full VM.

Backup Strategy with Proxmox Backup Server

A homelab without backups is a disaster waiting to happen. Proxmox Backup Server (PBS) is purpose-built for this role.

Why PBS Over vzdump Alone?

The built-in vzdump tool can create full backups of VMs and containers, but PBS adds critical capabilities:

  • Incremental backups --- only changed data blocks are transferred after the first full backup.
  • Client-side deduplication --- identical blocks across different VMs are stored only once.
  • Encryption --- AES-256-GCM client-side encryption so the backup server never sees your data in the clear.
  • Integrity verification --- built-in verification jobs ensure backups are not silently corrupted.
  • Granular restore --- browse and restore individual files from a backup without restoring the entire VM.

Setting Up PBS

PBS can run as a VM on your Proxmox host, on a separate physical machine, or even on a Raspberry Pi with attached USB storage. For a single-node homelab, running PBS in a VM is perfectly acceptable. The key is to store the backup datastore on a different physical disk than your VM storage.

  1. Download the PBS ISO from proxmox.com/en/downloads and install it (the process is similar to PVE installation).
  2. Create a Datastore in the PBS web UI, pointing to your backup storage path.
  3. In Proxmox VE, navigate to Datacenter > Storage > Add > Proxmox Backup Server. Enter the PBS IP, datastore name, and credentials.
  4. Create a Backup Job under Datacenter > Backup: select all VMs/CTs, set a schedule (e.g., daily at 2:00 AM), and set a retention policy.

For a homelab, a sensible retention policy might be:

keep-last: 3
keep-daily: 7
keep-weekly: 4
keep-monthly: 6

This keeps the last 3 backups regardless of age, plus daily backups for a week, weekly for a month, and monthly for six months. Deduplication keeps the actual disk usage far lower than you would expect.
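
Scheduled jobs aside, you can also fire off an on-demand backup of a single guest from the shell; a sketch, assuming the PBS storage was added with the ID pbs-datastore:

# One-off snapshot-mode backup of VM 100 to the PBS storage
vzdump 100 --storage pbs-datastore --mode snapshot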

The 3-2-1 Rule

Even with PBS, follow the 3-2-1 backup rule: 3 copies of your data, on 2 different media types, with 1 offsite. For a homelab, this might look like:

  1. Live data on your ZFS pool.
  2. PBS backup on a separate internal drive or NAS.
  3. Offsite sync to a remote PBS instance, a Backblaze B2 bucket via rclone, or a USB drive you rotate offsite.
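
For the offsite leg, one option is pushing a local backup directory to object storage with rclone. A sketch, assuming a remote named b2remote has already been set up with rclone config and that the paths are illustrative:

# Mirror a local backup directory to a Backblaze B2 bucket
rclone sync /mnt/backups b2remote:my-homelab-backups --transfers 4 --progress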

GPU Passthrough Basics

GPU passthrough lets you dedicate a physical graphics card to a VM, giving it near-native GPU performance. This is essential for local AI/ML inference (Ollama, LocalAI), Plex hardware transcoding, or running a Windows gaming VM.

Prerequisites

  • CPU with IOMMU support: Intel VT-d or AMD-Vi (virtually all modern CPUs support this).
  • Motherboard/BIOS with IOMMU enabled: Look for “VT-d,” “AMD-Vi,” or “IOMMU” in BIOS settings.
  • Discrete GPU in its own IOMMU group: The GPU must be isolatable from other devices.
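
A quick way to confirm the firmware setting took effect after enabling it in the BIOS:

# Look for DMAR (Intel) or AMD-Vi messages indicating the IOMMU is present
dmesg | grep -e DMAR -e IOMMU -e AMD-Vi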

Step 1: Enable IOMMU in the Bootloader

Edit the kernel command line:

# For Intel CPUs
nano /etc/default/grub
# Change GRUB_CMDLINE_LINUX_DEFAULT to:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# For AMD CPUs
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"

# Apply the change
update-grub

If you are using systemd-boot (common with ZFS installs):

# Edit the kernel command line
nano /etc/kernel/cmdline
# Add: intel_iommu=on iommu=pt (or amd_iommu=on iommu=pt)

# Apply
proxmox-boot-tool refresh

Reboot.

Step 2: Load VFIO Modules

# Add VFIO modules (on the 6.x kernels shipped with PVE 8, vfio_virqfd is built in and no longer needs to be listed)
echo -e "vfio\nvfio_iommu_type1\nvfio_pci" >> /etc/modules

# Blacklist GPU drivers on the host
echo -e "blacklist nouveau\nblacklist nvidia\nblacklist radeon\nblacklist amdgpu" >> /etc/modprobe.d/blacklist.conf

# Update initramfs
update-initramfs -u -k all

Reboot again.

Step 3: Verify IOMMU Groups

for d in /sys/kernel/iommu_groups/*/devices/*; do
    n=${d#*/iommu_groups/*}; n=${n%%/*}
    printf 'IOMMU Group %s ' "$n"
    lspci -nns "${d##*/}"
done

Your GPU (and its audio device) should be in their own IOMMU group. If they share a group with other critical devices, you may need ACS override patches (a more advanced topic).

Step 4: Assign the GPU to a VM

In the VM’s Hardware tab, click Add > PCI Device:

  • Select your GPU from the list.
  • Check All Functions to include the GPU’s audio controller.
  • Check PCI-Express for best performance.
  • If using NVIDIA, check ROM-Bar as well.

Set the VM’s Machine type to q35 and BIOS to OVMF (UEFI). Start the VM and install the appropriate GPU drivers (NVIDIA, AMD, or Intel) inside the guest OS.
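
The same assignment can be done from the CLI; a sketch, assuming the GPU sits at PCI address 01:00 and the VM is ID 101 (find yours with lspci):

# Pass through all functions of the device at 01:00 as a PCIe device
qm set 101 -hostpci0 0000:01:00,pcie=1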

Security Hardening

A Proxmox host accessible on your network should be hardened, even in a homelab. Here are the essential steps.

Enable the Proxmox Firewall

Proxmox includes a built-in firewall that can be configured at the datacenter, host, and VM level.

  1. Navigate to Datacenter > Firewall > Options and set Firewall: Yes.
  2. Add an input rule allowing TCP port 8006 (web UI) and TCP port 22 (SSH) from your management subnet:
# Datacenter > Firewall > Add Rule
Direction: IN
Action: ACCEPT
Source: 192.168.1.0/24
Dest. port: 8006
Protocol: TCP
  3. Enable the firewall on the node level as well: Node > Firewall > Options > Firewall: Yes.

The default policy for input traffic is DROP, so make sure your allow rules are in place before enabling the firewall to avoid locking yourself out.
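
Behind the scenes these rules live in /etc/pve/firewall/cluster.fw. A minimal example matching the rules above (your management subnet may differ):

[OPTIONS]
enable: 1

[RULES]
IN ACCEPT -source 192.168.1.0/24 -p tcp -dport 8006
IN ACCEPT -source 192.168.1.0/24 -p tcp -dport 22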

Harden SSH

Edit /etc/ssh/sshd_config:

# Disable root password login (use SSH keys instead)
PermitRootLogin prohibit-password

# Disable password auth entirely (after setting up SSH keys)
PasswordAuthentication no

# Modern OpenSSH only speaks protocol 2, so the old "Protocol 2" directive is no longer needed

Generate and copy your SSH key:

# On your workstation
ssh-keygen -t ed25519 -C "homelab"
ssh-copy-id root@192.168.1.100

Restart SSH:

systemctl restart sshd

Install and Configure Fail2Ban

Fail2ban watches log files for failed authentication attempts and temporarily bans offending IPs:

apt install -y fail2ban

Create a Proxmox-specific jail:

cat > /etc/fail2ban/jail.local << 'EOF'
[proxmox]
enabled = true
port = https,http,8006
filter = proxmox
backend = systemd
maxretry = 3
findtime = 2d
bantime = 1h
bantime.increment = true
bantime.factor = 24
bantime.maxtime = 30d
EOF

Create the Proxmox filter:

cat > /etc/fail2ban/filter.d/proxmox.conf << 'EOF'
[Definition]
failregex = pvedaemon\[.*authentication (verification )?failure; rhost=<HOST> user=\S+ msg=.*
ignoreregex =
journalmatch = _SYSTEMD_UNIT=pvedaemon.service
EOF

Enable and start fail2ban:

systemctl enable --now fail2ban
fail2ban-client status proxmox

With incremental banning enabled, repeat offenders are locked out for progressively longer periods: the first ban lasts an hour, and each subsequent ban is multiplied by the configured factor, capped at 30 days. This makes brute-force attacks impractical.

Enable Two-Factor Authentication

Proxmox supports TOTP (time-based one-time passwords) natively:

  1. Navigate to Datacenter > Permissions > Two Factor.
  2. Click Add > TOTP.
  3. Scan the QR code with your authenticator app (Authy, Google Authenticator, or a hardware key).

This adds a second layer of protection to the web UI and API access.

Use Let’s Encrypt Certificates

Replace the self-signed certificate with a trusted Let’s Encrypt certificate:

  1. Navigate to Node > System > Certificates > ACME.
  2. Register an ACME account.
  3. Add a domain entry for your Proxmox hostname.
  4. Click Order Certificates Now.

This eliminates browser certificate warnings and secures the web UI with a valid TLS certificate. Note that this requires your Proxmox host to be reachable on port 80 for the HTTP-01 challenge, or you can use DNS-01 validation with a supported DNS provider.
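
The same steps are available from the shell via pvenode if you prefer the CLI (the email address and domain below are placeholders):

# Register an ACME account, set the domain, and order the certificate
pvenode acme account register default you@example.com
pvenode config set --acme domains=pve.example.com
pvenode acme cert order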

Performance Tuning Tips

A few quick wins that make a noticeable difference:

Enable SSD TRIM for ZFS:

zpool set autotrim=on rpool
zpool set autotrim=on tank  # if you have a separate data pool

Set CPU governor to performance (if you prefer raw speed over power savings):

echo "performance" | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

Enable KSM (Kernel Same-page Merging) to deduplicate identical memory pages across VMs:

echo 1 > /sys/kernel/mm/ksm/run

Proxmox enables this by default, but verify it is active on your system.
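
To check whether KSM is actually merging pages:

# run = 1 means KSM is enabled; pages_sharing > 0 means memory is being deduplicated
cat /sys/kernel/mm/ksm/run
cat /sys/kernel/mm/ksm/pages_sharing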

Use VirtIO drivers for all VM disks and network interfaces. VirtIO is paravirtualized, meaning the guest OS cooperates with the hypervisor for near-native I/O performance. This is dramatically faster than emulated IDE or E1000 devices.

What to Run on Your New Proxmox Homelab

Now that your infrastructure is ready, here are some popular workloads to deploy:

  • Reverse proxy (Traefik, Caddy, or Nginx Proxy Manager) --- route traffic to your services with automatic HTTPS.
  • Home Assistant --- home automation platform, runs great in a dedicated VM with USB passthrough for Zigbee/Z-Wave dongles.
  • Pi-hole or AdGuard Home --- network-wide ad blocking in an LXC container.
  • Nextcloud --- self-hosted file sync, calendar, and contacts.
  • Jellyfin or Plex --- media server with optional GPU transcoding via passthrough.
  • Gitea or Forgejo --- lightweight self-hosted Git server.
  • Uptime Kuma --- beautiful uptime monitoring dashboard.
  • Vaultwarden --- self-hosted Bitwarden-compatible password manager.

Each of these deserves its own guide, and we will be covering them in detail in future Cortex Cove articles.

Troubleshooting Common Issues

“No valid subscription” warning after update. The subscription nag patch gets overwritten when proxmox-widget-toolkit is updated. Reapply the sed command or use pve-nag-buster for automatic patching.

VM won’t start with GPU passthrough. Verify IOMMU is enabled (dmesg | grep -i iommu), check that VFIO modules are loaded (lsmod | grep vfio), and ensure the GPU is not bound to a host driver (lspci -nnk | grep -A3 "VGA").

ZFS pool shows DEGRADED. A drive has failed or been removed. Replace it with zpool replace tank /dev/old-disk /dev/new-disk and let the resilver complete.

Container cannot access the internet. Check that the bridge is configured correctly, the container has a gateway set, and /etc/resolv.conf inside the container has a valid DNS server.
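
A few quick checks from the host can narrow this down (container 200 as an example):

# Inspect routing and DNS from inside the container without opening a console
pct exec 200 -- ip route
pct exec 200 -- cat /etc/resolv.conf
pct exec 200 -- ping -c 1 1.1.1.1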

High memory usage reported in the UI. ZFS ARC (Adaptive Replacement Cache) uses free RAM as a read cache. This is by design and will be released when VMs need it. The memory is not “used” in the traditional sense. If you want to limit ARC size:

echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf  # Limit to 4 GB
update-initramfs -u

Wrapping Up

You now have a complete roadmap for building a production-grade Proxmox homelab from scratch. We covered hardware selection across every budget, walked through the full Proxmox VE installation process, configured networking and storage with ZFS, deployed both VMs and LXC containers, set up backups with Proxmox Backup Server, touched on GPU passthrough for AI and media workloads, and locked everything down with firewall rules, fail2ban, and two-factor authentication.

This is a living guide. As Proxmox evolves --- and version 9 is already on the horizon --- we will update these instructions to reflect the latest best practices.

If you found this Proxmox setup guide helpful, bookmark it and check back for our upcoming deep-dive articles on specific homelab services, advanced Ceph clustering, and Proxmox high-availability configurations. Every guide in the Cortex Cove homelab series builds on the foundation we established here.