Optimizing Internal Hyper-V Switches + SMB Multichannel

I have some Hyper-V VMs with unusual requirements:

  • An SMB share on the host that guests need to read from and write to
  • Cannot use attached storage or Guest Services integration
  • Needs to be reasonably close to direct I/O speed
  • The host and multiple guests all access a single share

To solve this, I decided to use an Internal Virtual Switch with SMB 3.0 Multichannel. The switch reports a 10 Gbps link, which should meet my needs (it is possible to increase this with SET, though). It appears on the host as a network adapter I will call "vEthernet (Internal)" and in the guests as a network adapter I will call "Internal". SMB Multichannel spreads a transfer across multiple connections on an RSS-capable NIC, which is what provides the much higher throughput for SMB transfers.
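If you would rather script the switch creation than click through Hyper-V Manager, here is a minimal sketch (assuming you name the switch "Internal", as I do below):

    # Create the internal switch; Hyper-V then creates a host-side
    # adapter named "vEthernet (Internal)" automatically
    New-VMSwitch -Name "Internal" -SwitchType Internal

    # The virtual adapter should report a 10 Gbps link
    Get-NetAdapter -Name "vEthernet (Internal)" | Select-Object Name, LinkSpeed, Status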

Steps (PowerShell sketches of the host-side and guest-side commands follow the list):

  • Create file share on host
  • Create new Virtual Switch called Internal
  • Configure new "vEthernet (Internal)" NIC, don't rename
  • Set IP address 172.16.0.1/24
  • No gateway, no DNS
  • Disable "Register this connection's address in DNS" in IPv4 advanced
  • Disable IPv6
  • Go to driver properties (Configure button)
  • Set 9014 byte jumbo packets
  • Ensure Receive Side Scaling is enabled
  • Verify with PowerShell Get-NetAdapterRss that the adapter shows up as enabled
  • Verify SMB configuration:
  • Get-SmbServerConfiguration | Select EnableMultichannel should be True
  • Run Get-SmbServerNetworkInterface and you should see RSS Capable True on the NIC
  • Add an A record in your DNS for "internal" to 172.16.0.1 (or just use the IP)
  • Add NIC to guests with the "Internal" switch
  • Configure new NIC, rename to "Internal"
  • Set IP address 172.16.0.2/24
  • No gateway, no DNS
  • Disable "Register this connection's address in DNS" in IPv4 advanced
  • Disable IPv6
  • Go to driver properties (Configure button)
  • Set 9014 byte jumbo packets
  • Ensure Receive Side Scaling is enabled
  • Set Send Buffer Size and Receive Buffer Size to 4 MB (source)
  • Verify with PowerShell Get-NetAdapterRss that the adapter shows up as enabled
  • Verify SMB configuration:
  • Get-SmbClientConfiguration | Select EnableMultichannel should be True
  • Run Get-SmbClientNetworkInterface and you should see RSS Capable True on the NIC
  • Map the network share with the path \\internal\Share to a drive
  • Force traffic to the "internal" host (or 172.16.0.1) to go over the "Internal" NIC
  • New-SmbMultichannelConstraint -InterfaceAlias "Internal" -ServerName "internal"
  • Run it again with -ServerName "172.16.0.1" if you're paranoid
  • Run ATTO Disk Benchmark on the mapped drive and see if you're within a reasonable percentage of the host's direct I/O speed
  • Small transfer sizes will always be slower because of the overhead of the SMB connection
  • Observe CPU and Disk usage during the benchmark and see if you're encountering any bottlenecks
  • Remember, all this voodoo with switches and NICs and RSS is happening in kernel-level code, so you will be CPU-bound
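
Here is roughly what the host-side steps look like in PowerShell. This is a sketch, not a copy/paste script: the share path (C:\Share), the Everyone permission, and the exact Jumbo Packet display strings are placeholders you should adjust for your setup and driver.

    # --- Host (run elevated) ---

    # Static IP on the internal adapter; no gateway, no DNS, no DNS registration
    New-NetIPAddress -InterfaceAlias "vEthernet (Internal)" -IPAddress 172.16.0.1 -PrefixLength 24
    Set-DnsClient -InterfaceAlias "vEthernet (Internal)" -RegisterThisConnectionsAddress $false

    # Disable IPv6 on this adapter only
    Disable-NetAdapterBinding -Name "vEthernet (Internal)" -ComponentID ms_tcpip6

    # Jumbo frames and RSS (check Get-NetAdapterAdvancedProperty for the exact
    # DisplayName/DisplayValue strings your driver uses)
    Set-NetAdapterAdvancedProperty -Name "vEthernet (Internal)" -DisplayName "Jumbo Packet" -DisplayValue "9014 Bytes"
    Enable-NetAdapterRss -Name "vEthernet (Internal)"

    # Share the folder (C:\Share and Everyone are placeholders)
    New-SmbShare -Name "Share" -Path "C:\Share" -FullAccess "Everyone"

    # Verify multichannel and RSS on the server side
    Get-SmbServerConfiguration | Select-Object EnableMultichannel
    Get-SmbServerNetworkInterface
    Get-NetAdapterRss -Name "vEthernet (Internal)"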
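
The guest-side steps, including the multichannel constraint and the drive mapping, look like this (again a sketch: "Ethernet 2" is whatever name the new NIC came up with inside the VM, and S: is an arbitrary drive letter):

    # --- Guest (run elevated inside the VM) ---

    # Rename the new adapter and give it a static IP in the same subnet
    Rename-NetAdapter -Name "Ethernet 2" -NewName "Internal"
    New-NetIPAddress -InterfaceAlias "Internal" -IPAddress 172.16.0.2 -PrefixLength 24
    Set-DnsClient -InterfaceAlias "Internal" -RegisterThisConnectionsAddress $false
    Disable-NetAdapterBinding -Name "Internal" -ComponentID ms_tcpip6

    # Jumbo frames and RSS; the Send/Receive Buffer Size entries are easiest to set
    # from the driver's Advanced tab, or with Set-NetAdapterAdvancedProperty once you
    # know the exact strings from Get-NetAdapterAdvancedProperty
    Set-NetAdapterAdvancedProperty -Name "Internal" -DisplayName "Jumbo Packet" -DisplayValue "9014 Bytes"
    Enable-NetAdapterRss -Name "Internal"

    # Verify multichannel and RSS on the client side
    Get-SmbClientConfiguration | Select-Object EnableMultichannel
    Get-SmbClientNetworkInterface

    # Pin SMB traffic for the server "internal" to this adapter, then map the share
    New-SmbMultichannelConstraint -InterfaceAlias "Internal" -ServerName "internal"
    New-PSDrive -Name "S" -PSProvider FileSystem -Root "\\internal\Share" -Persist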

My results were very similar when testing an SSD. On the left is the VM connecting through SMB on the internal NIC, and on the right is the host doing direct I/O.
[Screenshot: ATTO benchmark comparison showing near-identical speeds]

You can see a significant taper at the small transfer sizes for the VM; that's again the SMB overhead, which can't really be optimized away. Setting that aside, the actual transfer rate is almost identical.