Hyper-V GPU Passthrough: An Essential Guide for Beginners
In some scenarios, you may need to use a video card (containing a GPU), also called a graphic adapter or display adapter, on a virtual machine. However, using an emulated video card in a Hyper-V virtual machine may not be enough for tasks utilizing OpenGL, Direct3D, CUDA, and other hardware-related features. Fortunately, it is possible to attach a physical video card to a VM on a Hyper-V host by using the Hyper-V GPU passthrough feature.
What Is GPU Passthrough?
GPU Passthrough is a feature that allows you to connect a physical video card installed on a physical host to a virtual machine without emulation. As a result, the virtual machine can use a physical graphics adapter directly.
By default, Hyper-V virtual machines use an emulated graphics card, which relies on the Hyper-V host’s CPU. An emulated adapter is sufficient only for basic tasks and cannot handle workloads that require real graphics performance.
NOTE: A GPU can be integrated into the chipset on a motherboard or into the central processing unit (CPU) itself, as in recent Intel processor generations and related architectures. A high-performance GPU is a chip on a discrete video card attached to the motherboard via the PCI Express (PCIe) interface.
Key Benefits of GPU Passthrough in Hyper-V
A physical graphics card connected via GPU passthrough in a Hyper-V virtual machine can be used by home users, developers, designers, and others in specific scenarios. The most common scenarios are:
- Running applications with graphics-intensive workloads (graphic design, 3D modeling, AutoCAD drawing, engineering calculations, game design and development, etc.) that use hardware-accelerated rendering.
- Running games with hardware acceleration features. Some games may not work without a physical video card.
- Running machine learning (ML) and artificial intelligence (AI) applications using a GPU.
The benefits of the GPU passthrough mode are:
- Better graphics performance with VMs directly accessing a graphics card for graphic-intensive applications or games. Video playback is smoother. Hardware-accelerated graphics and the latest APIs are available.
- Flexible utilization of hardware resources. Using one Hyper-V host for multiple VMs with uneven graphics-intensive tasks in cases where using dedicated workstations is not optimal.
- Cost-efficiency. GPU passthrough can save costs in some scenarios, especially in terms of using hardware resources. This feature can be also used for a virtual desktop infrastructure (VDI).
- Security. The security advantages are similar to those of virtualization in general. If vulnerabilities are exploited in VMs, the VMs run in an isolated environment, and there are more possibilities to mitigate the issue. In case of serious issues, it is possible to restore VMs rapidly from a backup.
Requirements
To configure a VM with GPU passthrough on a Hyper-V host, you must meet certain hardware and software requirements. Not all video cards can be used for this feature.
Hardware specifications for GPU Passthrough
- The CPU on the Hyper-V host must support the Intel VT-x or AMD-V virtualization features. The appropriate virtualization feature must be enabled for your processor in the UEFI/BIOS settings of the Hyper-V host.
- The Input-Output Memory Management Unit (IOMMU) must be supported by the CPU on the Hyper-V host. This feature is required for PCI passthrough, including video card or GPU passthrough.
- A video card with a GPU that supports GPU virtualization technologies, such as NVIDIA GRID or AMD MxGPU (Multiuser GPU). These technologies are vendor-specific. Using server-class hardware improves success rates. Older devices that use PCI Interrupts (INTx) are not supported.
- SR-IOV (Single Root Input/Output Virtualization) should be supported and enabled to avoid errors.
Software requirements for Hyper-V Passthrough
- Windows Server 2016 or later Windows Server version (preferred)
- Windows 10 or Windows 11
- The Hyper-V role (feature) must be enabled in Windows.
- The latest graphics drivers must be installed on the Hyper-V host and VM guest using GPU passthrough.
- Only Generation 2 Hyper-V VMs can be used for GPU passthrough with DDA.
Limitations and Unsupported Configurations
Note the configurations that are not supported for using GPU passthrough:
- VMs using Hyper-V Dynamic Memory, that is, the feature to allocate and deallocate RAM for a VM as needed, cannot use GPU passthrough.
- GPU Passthrough is available only on the highest Windows editions, such as Windows Server 2019 Datacenter.
- If Windows Subsystem for Linux is deployed on the Hyper-V host, then an error with code 43 can occur on the VM.
- Clustering features such as High Availability and VM live migration are not supported.
- Save and restore with VM checkpoints are not supported.
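To address the Dynamic Memory limitation above, you can switch a VM to a fixed amount of RAM before configuring passthrough. This is a minimal sketch; the VM name and the 8 GB size are placeholders for illustration:

```powershell
# Disable Dynamic Memory and assign fixed startup RAM instead,
# since VMs using Dynamic Memory cannot use GPU passthrough.
# "YourVMName" and 8GB are example values - adjust for your VM.
Set-VMMemory -VMName "YourVMName" -DynamicMemoryEnabled $false -StartupBytes 8GB
```

The VM must be powered off before changing its memory configuration.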
Setting Up GPU Passthrough: What You Need to Know
You should be aware that GPU passthrough configuration may be more complex than most regular VM configuration tasks. The scalability of virtual environments using GPU passthrough is not as high as with the default emulated video card. You also cannot migrate VMs that use the GPU passthrough feature between Hyper-V hosts because these VMs are bound to the physical graphics adapters installed in the host.
Through Windows Server 2016, the RemoteFX vGPU feature was used to provide virtual machines with GPU acceleration. Starting with Windows Server 2019 and Windows 10 version 1809, the RemoteFX feature is no longer available, and only Hyper-V Discrete Device Assignment (DDA) can be used to pass through a graphics card, similarly to other PCI/PCIe devices, including NVMe storage devices. DDA is the analog of VMware PCI passthrough.
RemoteFX is an extension for RDP (Remote Desktop Protocol) used to connect USB devices attached to the host to a virtual machine; RemoteFX vGPU was the corresponding feature for graphics cards. RemoteFX allowed sharing one GPU among multiple VMs, which is not possible with Discrete Device Assignment. However, RemoteFX limited dedicated video memory to 1 GB per VM, and the maximum frame rate was 30 FPS. With DDA, there is no such video RAM limitation, and the frame rate can be set to 60 FPS.
If you use Windows Server 2016 Datacenter or other Windows versions and editions that used to support RemoteFX, you can install a Windows update that completely removes this feature from Hyper-V to avoid potential security vulnerabilities. Microsoft's official explanation is that RemoteFX was removed because of architectural vulnerabilities in Hyper-V. RemoteFX was attractive for high-density virtual environments where assigning a dedicated physical graphics card to each VM was not possible. Note that DDA works on Windows Server versions only (not on client versions, such as Windows 10).
GPU partitioning in PowerShell can be a solution on client Windows versions, but copying video drivers from the host OS to the guest OS (the driver version must be the same as on the host) is tricky. Generation 2 VMs must be used, and the graphics card must support GPU partitioning.
Configuring GPU Passthrough in Hyper-V
Follow the steps below to prepare the environment and configure Hyper-V GPU passthrough.
Preparing to configure GPU passthrough
- Ensure that your hardware and software support GPU virtualization.
- Enable Intel VT-d or AMD-V virtualization features for your CPU in UEFI/BIOS on the Hyper-V host.
- Enable IOMMU. The IOMMU setting can be enabled in different ways on different motherboards with different UEFI/BIOS versions. Sometimes, this setting can be located in the North Bridge configuration. Or IOMMU can be enabled when you enable Intel VT-d or AMD-V. Check the feature called Memory Remap in UEFI/BIOS.
You can check whether IOMMU is enabled on a Hyper-V host machine with the PowerShell command (as Administrator):
(Get-VMHost).IovSupport; (Get-VMHost).IovSupportReasons
True – enabled; False – disabled.
- Remove all checkpoints for the VM that you are going to configure using GPU passthrough.
If automatic checkpoints are enabled for the VM, you can disable them with the PowerShell command:
Set-VM -Name VMName -AutomaticCheckpointsEnabled $false
For DDA, you must also set the VM's automatic stop action to turn the VM off:
Set-VM -Name VMName -AutomaticStopAction TurnOff
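Existing checkpoints can also be removed with PowerShell rather than in Hyper-V Manager. A short sketch, where "VMName" is a placeholder for your VM name:

```powershell
# Remove all existing checkpoints of the VM before enabling GPU passthrough;
# this also merges the checkpoint differencing disks back into the parent VHDX.
Get-VMCheckpoint -VMName "VMName" | Remove-VMCheckpoint
```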
GPU passthrough in Windows Server
- To enable guest-controlled cache types and set the limits for the 32-bit (3 GB for 32-bit operating systems) and greater-than-32-bit memory-mapped I/O (MMIO) space, run this PowerShell command as Administrator:
Set-VM -Name VMName -GuestControlledCacheTypes $True -LowMemoryMappedIoSpace 3Gb -HighMemoryMappedIoSpace 33280Mb
Alternatively, you can use three separate commands:
Set-VM -GuestControlledCacheTypes $true -VMName YourVMName
Set-VM -LowMemoryMappedIoSpace 3Gb -VMName YourVMName
Set-VM -HighMemoryMappedIoSpace 33280Mb -VMName YourVMName
These limits define the MMIO address space through which the device is accessible to the VM. You can use a machine profile script from Microsoft to determine the most precise MMIO limit values; the optimal values differ between video adapters. If you get a message that there are not enough resources when starting a VM, shut down the VM and increase these values. 33280 MB is used for the MMIO space above the 32-bit range.
- Check the physical address of the PCI Express device (the device’s location path), which is the needed graphics card that you want to pass through.
It can be done in Device Manager. You can open Device Manager by running the
devmgmt.msc
command. In Device Manager:
- Right-click the needed display adapter in the Display adapters section and hit Properties in the context menu.
- Select the Details tab in the adapter properties window.
- Select the Location paths property in the drop-down menu and copy the value (the values can be different for each computer).
You can also use PowerShell to identify a device’s location path:
Get-PnpDevice | Where-Object {$_.Present -eq $true -and $_.Class -eq "Display"} | Select-Object Name, InstanceId
and
Get-PnpDevice -Class Display | ForEach-Object { Write-Output "$($_.FriendlyName) has a device id of $($_.DeviceId) and is located at $($_ | Get-PnpDeviceProperty DEVPKEY_Device_LocationPaths | Select-Object -ExpandProperty Data | Where-Object { $_ -like "PCIROOT*" })"; }
The output should contain a string like this:
‘PCIROOT(0)#PCI(0300)#PCI(0000)’
- Disable this graphics card in Device Manager. Right-click the video adapter and hit Disable device in the context menu.
- Dismount a disabled display adapter in PowerShell:
Dismount-VMHostAssignableDevice -LocationPath "PCIROOT(0)#PCI(0300)#PCI(0000)" -Force
Where:
-Force is required when a partition driver (optional) is not installed. This is not a driver for a graphics card installed in a guest OS. Sometimes, a device vendor can provide this security mitigation driver for a device. If you are going to install this driver, you should install it before you dismount the PCI Express device from the host partition.
The location path used in this command is just for illustration purposes, and you should use your specific value.
- Run the command to assign this video card to a virtual machine with GPU passthrough via DDA:
Add-VMAssignableDevice -VMName VMName -LocationPath "PCIROOT(0)#PCI(0300)#PCI(0000)"
- Power on the VM and verify whether a physical video card is displayed in the Device Manager of the Windows VM together with the default emulated video adapter called Microsoft Hyper-V video.
- Install drivers for the video card on the VM’s guest OS. You can download video drivers on the official NVIDIA or AMD website.
- If you want to disconnect the video card from the VM, stop the VM and use the command on the host:
Remove-VMAssignableDevice -VMName YourVMName -LocationPath $locationPath
Next, run the command to connect the video card back to the Hyper-V host:
Mount-VMHostAssignableDevice -LocationPath $locationPath
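The whole DDA assignment sequence described above can be combined into one script. This is an illustrative sketch only: the VM name, MMIO sizes, and location path are placeholders that must be replaced with the values for your system.

```powershell
# Placeholder values - substitute your VM name and the location path
# copied from Device Manager on your host.
$vmName = "YourVMName"
$locationPath = "PCIROOT(0)#PCI(0300)#PCI(0000)"

# Prepare the VM: set the automatic stop action and the MMIO space limits.
Set-VM -Name $vmName -AutomaticStopAction TurnOff
Set-VM -Name $vmName -GuestControlledCacheTypes $true `
    -LowMemoryMappedIoSpace 3Gb -HighMemoryMappedIoSpace 33280Mb

# Detach the (already disabled) device from the host and attach it to the VM.
Dismount-VMHostAssignableDevice -LocationPath $locationPath -Force
Add-VMAssignableDevice -VMName $vmName -LocationPath $locationPath

Start-VM -Name $vmName
```

After the VM boots, install the vendor's graphics drivers in the guest OS as described below.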
Configuration in Windows 10
On client Windows versions, such as Windows 10 and Windows 11 (starting from Windows 10 version 1903), the workflow to configure Hyper-V GPU passthrough is different and relies on the GPU partitioning method:
- Check whether your video card supports GPU partitioning in Windows 10 with the PowerShell command:
Get-VMPartitionableGpu
In Windows 11, the command is:
Get-VMHostPartitionableGpu
- For GPU passthrough to a VM, the Add-VMGpuPartitionAdapter cmdlet is used. However, you must copy the graphics drivers from the Hyper-V host machine to the virtual machine, and the driver version in the guest must match the version on the host. The free Easy-GPU-PV script can be used to copy the drivers because this is a tricky process. Download this script as a ZIP file and extract the contents of the archive to a folder on the Hyper-V host. The script uses GPU paravirtualization mechanisms similar to those used by Windows Subsystem for Linux (WSL2) and Windows Sandbox.
- Open PowerShell as Administrator and run one of the following commands to allow script execution:
Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass -Force
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
- Shut down the VM. Run the script from the folder where the downloaded script is extracted to copy installed graphic drivers from the Hyper-V host to the VM and install the drivers on the VM:
.\Update-VMGpuPartitionDriver.ps1 -VMName YourVMName -GPUName "AUTO"
- Configure the VM to make it ready for GPU passthrough and video card association:
Set-VM -VMName YourVMName -GuestControlledCacheTypes $true -LowMemoryMappedIoSpace 1Gb -HighMemoryMappedIoSpace 32Gb
Add-VMGpuPartitionAdapter -VMName YourVMName
- If you update the graphic drivers on a Hyper-V host, you must update the graphic drivers on the VM too. The VM must be powered off.
.\Update-VMGpuPartitionDriver.ps1 -VMName YourVMName -GPUName "AUTO"
- If you need to remove a video card from the VM, you can use the command:
Remove-VMGpuPartitionAdapter -VMName "YourVMName"
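Optionally, the GPU partition resources assigned to the VM can be inspected and tuned. The sketch below is illustrative: the VRAM values are arbitrary examples, and the real minimum and maximum values for your card should be taken from the Get-VMPartitionableGpu output on the host.

```powershell
# Show the current GPU partition settings of the VM.
Get-VMGpuPartitionAdapter -VMName "YourVMName"

# Example: constrain the VRAM partition sizes for the VM.
# The byte values below are placeholders - query Get-VMPartitionableGpu
# on the host for the actual supported ranges of your graphics card.
Set-VMGpuPartitionAdapter -VMName "YourVMName" `
    -MinPartitionVRAM 80000000 `
    -MaxPartitionVRAM 100000000 `
    -OptimalPartitionVRAM 100000000
```

The VM must be powered off while changing partition settings.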
If the video card has been connected to the VM successfully, you should see the appropriate display adapter in the Device Manager of the VM. You will also see visual effects for the guest Windows desktop theme, such as transparency, etc.
Note that some games and applications may not run even after configuring GPU passthrough. This can happen when an application forcibly initiates a compatibility check of the graphics card. The information about the connected video card in the guest OS is different from the information on the host OS. Some applications can perform the “running in a VM” check.
If applications using OpenGL do not work in the VM, installing OpenGL Compatibility Pack can help in some cases.
When using the GPU partitioning method on Windows 10 and Windows 11, the video adapter model displayed in Device Manager differs from the physical graphics adapter model displayed on the Hyper-V host: you see a special Microsoft driver for this device. Vendor-specific tools, such as NVIDIA Control Panel or AMD Radeon Software, are not available in the VM.
Another issue that you may encounter happens if you close an RDP connection without disconnecting from an RDP session. In this case, all GPU memory can be disconnected, and all applications using the GPU will notify you about inaccessible video memory. These applications will stop working in this case, and reconnection via RDP will not fix the issue. Restarting applications using the GPU will be required.
Troubleshooting GPU Passthrough Issues
If you configured Hyper-V GPU passthrough and connected a video card to a VM, but the video card doesn’t work properly, check the following:
- Make sure that the latest graphics drivers are installed and there are no driver-related errors. Open Device Manager and check the device and driver status. Install drivers downloaded from the official vendor websites (NVIDIA, AMD, Intel) rather than via Windows Update.
- Ensure that you have assigned enough MMIO space for your VM.
- Check that the vendor supports the GPU passthrough configuration for your graphics adapter. Not all consumer-series video cards support this feature; vendors may enable GPU passthrough only for top-tier or professional cards.
- An application running inside a VM must support your video card and its drivers for proper work.
- Enable a group policy to use the GPU when connecting via Remote Desktop to a VM:
Computer Configuration\Administrative Templates\Windows Components\Remote Desktop Services\Remote Desktop Session Host\Remote Session Environment\Use hardware graphics adapters for all Remote Desktop Services sessions
Set the group policy value to Enabled.
- If you see an error like “The operation failed because the object was not found” or Error 12, try to add the registry keys in
HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\HyperV
with the values:
RequireSecureDeviceAssignment = 0 (REG_DWORD)
RequireSupportedDeviceAssignment = 0 (REG_DWORD)
You can set these values with PowerShell commands (the New-Item command creates the HyperV registry key first if it does not yet exist):
New-Item -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows\HyperV" -Force | Out-Null
Set-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows\HyperV" -Name "RequireSecureDeviceAssignment" -Type DWORD -Value 0 -Force
Set-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows\HyperV" -Name "RequireSupportedDeviceAssignment" -Type DWORD -Value 0 -Force
- Disable hypervisor checks for graphics drivers inside the VM. This configuration can be done in Enterprise Windows Driver Kit (WDK).
- Mount the WDK ISO file to the virtual DVD drive of the VM.
- Execute E:\LaunchBuildEnv.cmd (where E: is the virtual DVD drive of the VM), and then run powershell to start PowerShell in the build environment.
- Go to the directory where the
Remove-HypervisorChecks.ps1
script is located.
- Run the command to remove hypervisor checks for a video driver (NVIDIA in this example):
./Remove-HypervisorChecks.ps1 -Nvidia -DriverPath "C:\path-to-driver\package.exe"
- Wait until script execution is completed.
- Copy the prepared driver (a patched-driver.zip file) to the virtual machine, enable the test mode on the VM, and install the driver in a guest OS.
- If you encounter Error 43, ensure that the GPU and its PCI audio function (NVIDIA video cards may also include a sound device) are passed through to the VM together.
- If an error occurs when running the command:
Update-VMGpuPartitionDriver.ps1 -VMName "YourVMName" -GPUName "AUTO"
This error can be caused by multiple partitions (volumes) on a virtual hard disk. Try to set the hidden attribute for non-system (OS) partitions or temporarily delete these partitions after copying the needed data. An alternative option is to set the needed system (OS) partition explicitly in the PowerShell script or command.
Conclusion
Using a virtual machine with GPU passthrough on a Hyper-V host can be the optimal solution in some cases but be aware of the limitations. Server-grade hardware and Windows Server operating systems are generally preferred for using a discrete video card on virtual machines. Check for supported software and hardware before starting the configuration. Don’t forget to back up Hyper-V VMs to avoid losing data and time if something goes wrong when configuring a video card for VMs.