In the journey of building and developing systems, especially in homelab or dev/test environments, I believe many of you will encounter a familiar challenge: how to simulate complex environments like Kubernetes clusters, vCenter with multiple ESXi hosts, or even multi-domain Active Directory services, all while having only a few limited physical servers? Buying new hardware isn’t always feasible, and personal computer resources are limited.
I run a homelab with Proxmox VE managing 12 VMs and containers — this is a playground to test everything before moving it to production. To test more complex scenarios like migrating VMs between hypervisors, or building a complete vCenter environment, I needed a more flexible solution than just creating regular VMs.
The problem arises when I want to install a hypervisor like ESXi or KVM inside a virtual machine running on Proxmox. At first glance, this seems impossible because hardware virtualization features (like Intel VT-x or AMD-V) are usually reserved for the highest-level hypervisor.
The Real Problem: Why Can’t ESXi/KVM Be Installed in a VM “Normally”?
When you try to install a hypervisor like ESXi or KVM into a virtual machine (Guest OS) created on Proxmox VE, you’ll quickly realize that it doesn’t work as expected. Error messages about missing hardware virtualization support are quite common. The main reason here is the architecture of virtualization.
The Type 1 hypervisor (in this case, Proxmox VE) takes control of the CPU’s hardware virtualization features (Intel VT-x or AMD-V).
When you create a regular virtual machine on Proxmox, this hypervisor provides a virtualized environment for the Guest OS, but it does not expose or pass through the CPU’s hardware virtualization features to that Guest OS. This means that the Guest OS, even if it’s a full operating system, cannot “see” the physical CPU’s virtualization capabilities, and therefore cannot itself become a hypervisor to run other nested virtual machines.
Deeper Analysis of the Cause: The Importance of Intel VT-x/AMD-V
For a hypervisor to operate efficiently, it needs direct access to the CPU’s virtualization instruction sets (Intel VT-x for Intel and AMD-V for AMD). These instruction sets help the hypervisor isolate resources, manage memory, and switch contexts between virtual machines efficiently with minimal overhead.
When Proxmox VE boots, it automatically uses these instruction sets. If you don’t “tell” Proxmox that you want your virtual machine to also be able to use these instruction sets to virtualize itself, then the nested virtual machines will not have that capability. This is the essence of the problem and the reason for Nested Virtualization.
Solutions for Building Complex Labs
There are several approaches to building a complex lab with limited physical resources:
- Buy More Hardware: This is the simplest but also the most expensive method. Buy additional servers or workstations to install more physical hypervisors. It is also inflexible in terms of cost and space.
- Use Containers (Docker, LXC): For applications or services that require environment isolation, containers are an excellent choice. They are lightweight, start quickly, and are resource-efficient. However, containers cannot completely replace virtual machines when you need to simulate a full operating system or another hypervisor. You cannot install ESXi or a complete Windows Server into a Docker container.
- Nested Virtualization: This is the solution I will focus on guiding you through. It allows you to run a hypervisor (e.g., ESXi, KVM) inside a virtual machine (e.g., a VM on Proxmox VE), and then run nested virtual machines inside that hypervisor. The benefits of Nested Virtualization are clear:
- Hardware Savings: With just one physical server running Proxmox, you can create multiple independent lab environments, significantly reducing investment costs.
- Flexible and Fast: Easily create, delete, and clone lab environments with just a few clicks — perfect for learning and testing new scenarios without affecting the main system.
- Simulate Production Environments: Allows you to test complex architectures before actual deployment, minimizing risks.
The Best Approach: How to Configure Nested Virtualization on Proxmox VE
To fully utilize Nested Virtualization, we need to perform some configurations on both the physical Proxmox host and the Level 1 virtual machine (the Guest OS where the hypervisor will be installed). I will provide specific step-by-step instructions.
Step 1: Check and Enable Nested Virtualization on the Proxmox VE Host
First, you need to ensure that the CPU of the physical server running Proxmox VE supports virtualization (Intel VT-x or AMD-V) and that it is enabled in the BIOS/UEFI. Most modern CPUs have this feature.
To check if Nested Virtualization is already enabled, SSH into your Proxmox host and run the following commands:
cat /sys/module/kvm_intel/parameters/nested # For Intel CPUs
cat /sys/module/kvm_amd/parameters/nested # For AMD CPUs
If the result is Y or 1, Nested Virtualization is enabled. If it’s N or 0, you need to enable it.
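The two checks above can be wrapped in a small helper that works on either CPU vendor (a sketch; the sysfs paths are the standard KVM module parameter locations):

```shell
#!/bin/sh
# Report nested-virtualization status for whichever KVM module is loaded.
# Fallback message: either the CPU lacks VT-x/AMD-V or it is disabled in BIOS/UEFI.
status="no KVM module loaded"
for mod in kvm_intel kvm_amd; do
  param="/sys/module/$mod/parameters/nested"
  if [ -r "$param" ]; then
    status="$mod nested: $(cat "$param")"
  fi
done
echo "$status"
```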
Temporary Activation (effective until reboot):
# For Intel CPUs
modprobe -r kvm_intel
modprobe kvm_intel nested=1
# For AMD CPUs
modprobe -r kvm_amd
modprobe kvm_amd nested=1
After running the commands above, re-check using cat /sys/module/kvm_intel/parameters/nested (or kvm_amd). The result should be Y or 1.
Permanent Activation (persists after reboot):
For Nested Virtualization to be automatically enabled every time the Proxmox host boots, you need to add configuration to the module file:
# For Intel CPUs
echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm-intel.conf
# For AMD CPUs
echo "options kvm_amd nested=1" > /etc/modprobe.d/kvm-amd.conf
Then, update initramfs and reboot the Proxmox host:
update-initramfs -u -k all
reboot
After the host reboots, re-check the Nested Virtualization status to ensure it has been permanently enabled.
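You can also sanity-check that the option files from the previous step are in place before rebooting (the file names are the ones created by the echo commands above):

```shell
# Print any nested=1 options configured under /etc/modprobe.d; warn if none are found yet.
grep -H "nested=1" /etc/modprobe.d/kvm-*.conf 2>/dev/null || echo "no nested option configured yet"
```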
Step 2: Create a Level 1 Virtual Machine (Guest OS) to Install the Hypervisor (ESXi, KVM/Ubuntu Server)
Now, we will create a virtual machine on Proxmox VE to install the nested hypervisor (e.g., ESXi, or an Ubuntu Server VM with KVM). The key here is the VM’s CPU configuration.
VM creation steps:
- On the Proxmox VE interface, click “Create VM”.
- In the “General” tab, name the VM.
- In the “OS” tab, select the ISO of the hypervisor operating system you want to install (e.g., ESXi installer ISO, Ubuntu Server ISO).
- In the “System” tab, leave the default options or customize if needed.
- Most Important: Configure the CPU in the “CPU” tab
  - Type: Select `host`. Choosing the `host` type lets the Level 1 VM acquire virtualization capabilities similar to the physical CPU of the Proxmox host. Alternatively, you can select `kvm64` and add the flags `+vmx` (for Intel) or `+svm` (for AMD) to the “Flags” field if the `host` option does not work as expected or if you want more detailed control.
  - Cores: Set an appropriate number of cores. For example, to install ESXi in a VM, you need to allocate at least 2 CPU cores.
- In the “Memory” tab, allocate sufficient RAM for the Level 2 hypervisor and its nested virtual machines. For example, ESXi requires a minimum of 4GB RAM, while KVM on Ubuntu depends on your needs.
- In the “Disk” tab, create a disk for the Guest OS hypervisor.
- In the “Network” tab, configure the appropriate network bridge.
- Review and “Finish” to create the VM.
After the VM is created, start it and proceed with installing ESXi or Ubuntu Server (with KVM) as usual. Once installed, you will have a hypervisor running inside your Proxmox virtual machine.
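If you prefer the CLI over the web UI, the same VM can be sketched with `qm`. The VM ID (900), storage names, and ISO file name below are assumptions — adjust them to your setup. Note that ESXi has no virtio drivers, so an e1000e NIC and a SATA disk are safer choices for that guest:

```shell
# Create a Level 1 VM with CPU type "host" so VT-x/AMD-V is passed through.
# VM ID, storage, and ISO name are placeholders for this sketch.
# Guarded so the snippet is a no-op on machines without Proxmox's qm tool.
if command -v qm >/dev/null 2>&1; then
  qm create 900 \
    --name esxi-lab \
    --memory 8192 \
    --cores 4 \
    --cpu host \
    --net0 e1000e,bridge=vmbr0 \
    --sata0 local-lvm:64 \
    --cdrom local:iso/esxi-installer.iso
else
  echo "qm not found: run this on a Proxmox VE host"
fi
```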
Step 3: Verify Nested Virtualization Inside the Guest OS Hypervisor
After installing ESXi or KVM/Ubuntu Server into the Level 1 VM, you need to confirm that virtualization capabilities have been successfully passed through.
- For ESXi: Log in to your ESXi host (running in the VM). Check the CPU information. You should see “Hardware Virtualization” enabled (usually “Enabled” or “HV: Yes”). You can try creating a nested VM inside ESXi to see if it works.
- For KVM on Ubuntu Server (in VM): SSH into the Ubuntu Server VM and run:
grep -E 'vmx|svm' /proc/cpuinfo
If you see the `vmx` (Intel) or `svm` (AMD) flags in the output, Nested Virtualization is active and KVM can use hardware virtualization features to run nested virtual machines.
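On the ESXi side, the equivalent command-line check from the ESXi shell (with SSH enabled) is `esxcfg-info`; an “HV Support” value of 3 means hardware assist is enabled and usable by VMs. Guarded here so the snippet is a no-op on any other system:

```shell
# Check hardware-assisted virtualization status on an ESXi host.
# HV Support: 3 = VT-x/AMD-V enabled and usable; lower values mean missing or disabled.
if command -v esxcfg-info >/dev/null 2>&1; then
  esxcfg-info | grep "HV Support"
else
  echo "esxcfg-info not found: run this on an ESXi host"
fi
```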
Step 4: Create a Level 2 Virtual Machine (Guest of Guest)
Once Nested Virtualization is confirmed to be working, you are free to create nested virtual machines (Guest of Guest) inside the hypervisor running in your Level 1 VM (ESXi or KVM/Ubuntu Server), just as you would on a physical hypervisor.
For example, if you have installed ESXi in a Level 1 VM, you can use vSphere Client (or Host Client) to create Windows, Linux VMs, etc., within that ESXi instance. Similarly, if you use KVM on Ubuntu Server in a Level 1 VM, you can use virt-install, virsh, or virt-manager to create and manage nested virtual machines.
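For the KVM path, a nested guest can be sketched with `virt-install`, run inside the Ubuntu Server Level 1 VM. The VM name, disk size, and ISO path below are assumptions for illustration:

```shell
# Create a Level 2 (nested) guest with virt-install.
# Name, sizes, and ISO path are placeholders; guarded for machines without libvirt tools.
if command -v virt-install >/dev/null 2>&1; then
  virt-install \
    --name nested-demo \
    --memory 2048 \
    --vcpus 2 \
    --disk size=10 \
    --cdrom /var/lib/libvirt/images/ubuntu-server.iso \
    --os-variant ubuntu22.04 \
    --graphics none
else
  echo "virt-install not found: install the virtinst package first"
fi
```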
Conclusion
Nested Virtualization is an incredibly powerful technique that opens the door to building complex lab environments without requiring significant investment in physical hardware. With this solution, you can freely experiment with system architectures, learn new technologies, and prepare for real-world deployments in the most efficient way possible. I hope this guide helps you easily configure and leverage Nested Virtualization on Proxmox VE for your homelab or dev/test environment!
