2023-06-15 The SPDK Open Discussion Meeting notes
Date
Jun 15, 2023, UTC+8, 15:00 - 16:00
Recording (Mandarin):
Passwd: 1$EjdV*M
Participants
@Willen Yang
@Jun He
@Zhangfeng
@Qiqing Wu
@Qinfei Liu
@Kevin Zhao
@Xinliang Liu
Discussion topics
Topic 1: SPDK performance result report for Arm64/Kunpeng, SPDK Arm64 CI.
Background: Arm is an IP company and does not ship real SoCs. For vendor-neutrality reasons, it is not appropriate for Arm to publish a dedicated Arm64 server CPU performance report in the SPDK community the way Intel does. HiSilicon could do this for Kunpeng in the SPDK community together with Linaro.
Jun He: The SPDK community does not have Arm64 machines. Mellanox and Broadcom have set up external CI for patch landing, but that CI only covers very few test suites. Unlike DPDK, which is now under the Linux Foundation, SPDK upstream is wholly controlled by Intel and most of the maintainers are from Intel.
HiSilicon/Linaro could help set up the Arm64 CI so that Arm64 is fully supported in the community, and could also consider SPDK support and releases for openEuler. A fully functional CI should be a precondition for publishing the performance results (to be confirmed).
Qinfei: Contacted SPDK release manager Cao Gang from Intel to discuss the SPDK release procedure for openEuler. The SPDK version currently in openEuler is relatively old.
Topic 2: SPDK user scenarios currently observed by Hisilicon (Willen Yang, Qiqing Wu, Qinfei Liu)
Qinfei: Huawei's commercial customers are now using SPDK for their internal storage system development. Their expectation is full Arm64 support: CI, releases, and performance. Some storage vendors and hyperscalers, such as Tencent and China Mobile, are leveraging SPDK for high-performance storage. The other scenario is using SPDK vhost to support the VM-side software.
Topic 3: Hisilicon sharing its progress in SPDK usage and performance (Jun He, Qinfei Liu, Qiqing Wu)
SPDK TL0 performance improvements merged into ISA-L.
Ceph-nvf
aNOF: PMEM can serve as the NOF (NVMe-oF) cache to reduce latency. The PMEM is NVDIMM-N-like, persistence is guaranteed for the whole server board, and data protection is managed by firmware. That means the Arm64 flush instruction, which does not perform well, is not needed (see the sketch below).
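For context (not discussed in the meeting), a minimal sketch of the Armv8.2 DC CVAP (clean to point of persistence) flush that the firmware-backed design avoids on the data path; the helper names are illustrative, and the code assumes a toolchain targeting armv8.2-a or later with the DCPoP extension.

#include <stdint.h>

/* Clean one cache line containing addr to the point of persistence.
 * This is the per-write flush that firmware-managed persistence makes
 * unnecessary in the aNOF/NVDIMM-N design described above. */
static inline void flush_to_persistence(const void *addr)
{
    asm volatile("dc cvap, %0" : : "r"(addr) : "memory");
}

/* Barrier to ensure the cleans have completed before continuing. */
static inline void persist_barrier(void)
{
    asm volatile("dsb sy" : : : "memory");
}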
vhost-user performance degradation:
Jun: Arm has proposed a vhost scalability patch that can solve this problem; it is still in review. With it, performance increases linearly as the core count increases.
PGO optimization: in the high-concurrency scenario, vhost does not reach the bottleneck.
Performance parameters: the BRANCH_MISS counter is missing (see the perf_event sketch after this list).
Arm released a performance analysis whitepaper: the Neoverse V1 Performance Analysis whitepaper.
https://gitlab.arm.com/telemetry-solution/telemetry-solution
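For reference (an assumption, not from the meeting), a minimal sketch of counting branch misses on Linux via perf_event_open(2); whether the generic PERF_COUNT_HW_BRANCH_MISSES event is supported depends on the CPU's PMU and the kernel's event mapping.

#define _GNU_SOURCE
#include <linux/perf_event.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>
#include <stdint.h>

static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                            int cpu, int group_fd, unsigned long flags)
{
    return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.type = PERF_TYPE_HARDWARE;
    attr.size = sizeof(attr);
    attr.config = PERF_COUNT_HW_BRANCH_MISSES;  /* the BRANCH_MISS counter */
    attr.disabled = 1;
    attr.exclude_kernel = 1;

    int fd = perf_event_open(&attr, 0 /* this process */, -1, -1, 0);
    if (fd < 0) {
        perror("perf_event_open (BRANCH_MISS may be unsupported on this PMU)");
        return 1;
    }

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

    /* ... workload under test would run here ... */

    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
    uint64_t count = 0;
    read(fd, &count, sizeof(count));
    printf("branch misses: %llu\n", (unsigned long long)count);
    close(fd);
    return 0;
}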
VFIO-user: Arm has contributed a lot to VFIO-user. This mechanism offers the device interface to the VM without requiring any change in the hypervisor or kernel; the guest can simply use the original NVMe driver, which is quite convenient.
QEMU virtio has merged some driver features for virtio-blk and virtio-scsi.
Topic 4: Arm OSS work in the SPDK community (Jun He)
Vhost scalability optimization as mentioned before.
Fully support NVMe 2.0 functions in SPDK. NVMe 2.0 mainly added the KV (key-value) command set.
Todo:
vfio-user support in Kata Containers.
If HiSilicon has dedicated vhost-user performance data, Arm can help with the issue analysis.
Topic 5: Computational storage (Jun He, Qiqing Wu)
The idea is to offload work to the device, just like other IPUs.
The on-disk Arm64 chips are usually used for data filtering and do not handle data compression.
The NVMe TP4091 specification offers more information.