Investigate the latency behaviour of various host/guest networking configurations
Description
Activity
Takahiro Akashi May 18, 2023 at 1:04 AM
The result is reported in:
• Linaro blog: https://www.linaro.org/blog/network-latency-with-time-sensitive-networking-on-virtual-machine/
• Linaro Connect session: https://resources.linaro.org/en/resource/jfTURCDTat6faXFK8PwqwK
Some unresolved issues are mentioned in the material above.
We won't take any further action unless we get feedback on more realistic test conditions.
Takahiro Akashi July 8, 2022 at 3:28 AM
An initial result for case (d) has been added to the table below.
Note, however, that these values should not be compared with the results
for the other cases in the table, since an Intel i225 NIC, rather than the
on-chip 10G NIC, was used to measure the latencies in case (d).
(netperf)
$ netperf -H 192.168.20.1 -l -1000 -t TCP_RR -w 10ms -b 1 -v 2 -- \
    -O min_latency,mean_latency,max_latency,stddev_latency,transaction_rate
(latencies in microseconds)
                     min   avg   max   stddev
host2-to-host1       180   203   408      8.5
  with i225           75    85   277     21.6   <= NEW
vm-to-host1
  tap                214   254   581     14.2
  macvtap            217   244   567     13.0
  vfio (with i225)    80   102   317     19.6   <= NEW
  ovs                266   291   671     13.3
  eBPF (XDP)         221   254   571     13.4
Takahiro Akashi April 15, 2022 at 5:33 AM
(netperf)
$ netperf -H 192.168.20.1 -l -1000 -t TCP_RR -w 10ms -b 1 -v 2 -- \
    -O min_latency,mean_latency,max_latency,stddev_latency,transaction_rate
(latencies in microseconds)
                     min   avg   max   stddev
host2-to-host1       180   203   408      8.5
vm-to-host1
  tap                214   254   581     14.2
  macvtap            217   244   567     13.0
  ovs                266   291   671     13.3
  eBPF (XDP)         221   254   571     13.4
Takahiro Akashi March 25, 2022 at 9:15 AM
As of today (Mar 25),
(netperf)
$ netperf -H 192.168.20.1 -l -1000 -t TCP_RR -w 10ms -b 1 -v 2 -- \
    -O min_latency,mean_latency,max_latency,stddev_latency,transaction_rate
(latencies in microseconds)
                     min   avg   max   stddev
host2-to-host1       180   203   408      8.5
vm-to-host1
  tap                214   254   581     14.2
  macvtap            217   244   567     13.0
  eBPF (XDP)         221   254   571     13.4
Takahiro Akashi March 11, 2022 at 1:34 AM
My initial plan is to try the following virtual network configurations
with a KVM guest on Marvell's MACCHIATObin board.
a. user (+ NAT)
b. tap (+ bridge)
c. macvtap
d. NIC passthrough
e. Open vSwitch (as a simple bridge)
f. eBPF-based bridge
For the record, as of today (Mar 11), I have successfully set up the network
for (b) and (c) and obtained an initial latency result (a sketch of this kind
of host-side setup is shown below).
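To make the two working cases concrete, here is a minimal sketch of the host-side
wiring for (b) tap + bridge and (c) macvtap. The interface names (eth0, br0, tap0,
macvtap0) are placeholders, not necessarily what was used on the board:
# (b) tap + bridge: bridge the physical NIC together with a tap device for the guest
$ ip link add name br0 type bridge
$ ip link set eth0 master br0
$ ip tuntap add dev tap0 mode tap
$ ip link set tap0 master br0
$ ip link set br0 up
$ ip link set tap0 up
# (c) macvtap: put a macvtap endpoint directly on top of the physical NIC
$ ip link add link eth0 name macvtap0 type macvtap mode bridge
$ ip link set macvtap0 up
QEMU then attaches to tap0 via "-netdev tap,ifname=tap0,...", while for macvtap it
opens the corresponding /dev/tapN character device.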
1. Abstract
2. Networking Setups
.. 1. Test Setup
.. 2. Potential Packet Paths
.. 3. Host Networking
.. 4. KVM Guest with vhost networking
.. 5. Pass-through (SR-IOV or virtualised HW)
.. 6. Open vSwitch routing (Xen)
3. Notes
1 Abstract
══════════
As we move network endpoints into different VM configurations we need
to understand the costs and latency effects those choices will have.
With that understanding we can then consider various approaches to
optimising packet flow through virtual machines.
2 Networking Setups
═══════════════════
2.1 Test Setup
──────────────
The test setup will require two machines. The test controller will be
the source of the test packets and will measure the round-trip latency
of getting a reply from the test client. The test client will be set up
in multiple configurations so the latency of each can be compared.
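As a concrete sketch of the measurement itself (mirroring the netperf runs recorded
in the activity log, with the test client at 192.168.20.1 acting as the reflector):
# on the test client (192.168.20.1): run the netperf daemon
$ netserver
# on the test controller (host2 or the VM): measure request/response round-trip latency
$ netperf -H 192.168.20.1 -t TCP_RR -l -1000 -w 10ms -b 1 -v 2 -- \
    -O min_latency,mean_latency,max_latency,stddev_latency,transaction_rate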
2.2 Potential Packet Paths
──────────────────────────
For each experiment we need to measure the latency of 3 different
packet reflectors: a simple ping-pong running via one of the following
(see the sketch after this list):
• xdp_xmit - lowest-latency turnaround at the driver
• xdp_redir - bypass the Linux networking stack to user space
• xdp_pass - normal packet path to a conventional AF_INET socket
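As a rough sketch of how a reflector would be attached for these runs (the file and
section names are placeholders; the program itself returns XDP_TX, XDP_REDIRECT or
XDP_PASS depending on the variant being measured):
# build the eBPF reflector and attach it to the NIC's XDP hook
$ clang -O2 -g -target bpf -c xdp_reflector.c -o xdp_reflector.o
$ ip link set dev eth0 xdp obj xdp_reflector.o sec xdp
# detach after the run
$ ip link set dev eth0 xdp off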
2.3 Host Networking
───────────────────
This is the default case with no virtualisation involved.
2.4 KVM Guest with vhost networking
───────────────────────────────────
This is a KVM-only case where the vhost device allows packets to be
delivered directly from the guest kernel's address space. It still
relies on the host kernel's networking stack, though.
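A minimal sketch of the relevant QEMU options, assuming a tap0 device as in the
tap + bridge setup above (all other options omitted):
$ qemu-system-aarch64 ... \
    -netdev tap,id=net0,ifname=tap0,script=no,downscript=no,vhost=on \
    -device virtio-net-pci,netdev=net0
With vhost=on the virtio-net data path is handled by the host kernel's vhost-net
worker rather than by the QEMU process itself.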
2.5 Pass-through (SR-IOV or virtualised HW)
───────────────────────────────────────────
This uses either direct pass-through of a discrete Ethernet device or a
virtualised function (e.g. an SR-IOV VF). Control of the packet starts
and ends in the guest's kernel.
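A sketch of handing a device to the guest via VFIO; the PCI address 0000:01:00.0
is a placeholder:
# bind the NIC (or SR-IOV VF) to vfio-pci instead of its host driver
$ modprobe vfio-pci
$ echo 0000:01:00.0 > /sys/bus/pci/devices/0000:01:00.0/driver/unbind
$ echo vfio-pci > /sys/bus/pci/devices/0000:01:00.0/driver_override
$ echo 0000:01:00.0 > /sys/bus/pci/drivers/vfio-pci/bind
# pass it through to the guest
$ qemu-system-aarch64 ... -device vfio-pci,host=01:00.0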
2.6 Open vSwitch routing (Xen)
──────────────────────────────
Here the packets are switched into paravirtualised Xen interfaces by
the Dom0 kernel. I'm a little unsure what Open vSwitch uses to route
packets and whether it's the same as the existing eBPF work.
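For reference, a sketch of the kind of OVS wiring this would involve (vif1.0
follows the usual Xen vif<domid>.<devid> naming; which datapath OVS ends up using
for these ports is exactly the open question above):
# in Dom0: create an OVS bridge and add the physical NIC and the guest's vif
$ ovs-vsctl add-br ovsbr0
$ ovs-vsctl add-port ovsbr0 eth0
$ ovs-vsctl add-port ovsbr0 vif1.0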
3 Notes
═══════
• Notes from Nov15:
  <https://docs.google.com/document/d/1TqdXnAX8sy9ow8BJo9yDeLoAJ0q6fAlpwb7eAyJPpAg/edit#>
• FOSDEM 2020 talk on XDP:
  <https://archive.fosdem.org/2020/schedule/event/xdp_and_page_pool_api/attachments/paper/3625/export/events/attachments/xdp_and_page_pool_api/paper/3625/XDP_and_page_pool.pdf>