As the distributed block storage component of Arcfra Enterprise Cloud Platform (AECP), Arcfra Block Storage (ABS) introduces Boost mode to enhance the I/O link and optimize storage performance. This article explores how the feature works and compares AECP's performance with and without Boost mode enabled.
In short, ABS Boost mode leverages the vhost-user protocol to share memory among the Guest OS, QEMU, and ABS, simplifying the I/O link and accelerating the processing of I/O requests.
Currently, there are two mainstream I/O device virtualization solutions:

* Full device emulation: QEMU emulates a real hardware device entirely in software, so the Guest OS needs no modification, but every I/O operation must be trapped and emulated.
* Paravirtualization (Virtio): the Guest OS runs a Virtio-aware front-end driver that exchanges requests with the back-end device through shared rings.
Benefiting from the Vring I/O communication mechanism, Virtio effectively reduces I/O latency and performs better than QEMU's pure software emulation. This is why Virtio has become the prevalent paravirtualization solution for I/O devices across vendors.
In QEMU, a Virtio device is a PCI/PCIe device emulated for the Guest OS; it complies with the PCI standard and features a configuration space and interrupt functions. Notably, Virtio has a registered PCI vendor ID (0x1AF4) plus a set of device IDs, where each device ID represents a device type: for example, the storage device Virtio-BLK is 0x1001 and Virtio-SCSI is 0x1004.
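To illustrate, a guest can identify a Virtio device purely from this vendor/device ID pair. The following C sketch (a hypothetical helper, not actual driver code) classifies a PCI function by its IDs:

```c
#include <stdint.h>
#include <stdio.h>

#define VIRTIO_PCI_VENDOR_ID  0x1AF4  /* registered Virtio vendor ID */
#define VIRTIO_BLK_DEVICE_ID  0x1001  /* transitional Virtio-BLK */
#define VIRTIO_SCSI_DEVICE_ID 0x1004  /* transitional Virtio-SCSI */

/* Hypothetical helper: classify a PCI function by vendor/device IDs. */
static const char *virtio_device_type(uint16_t vendor, uint16_t device)
{
    if (vendor != VIRTIO_PCI_VENDOR_ID)
        return "not a Virtio device";
    switch (device) {
    case VIRTIO_BLK_DEVICE_ID:  return "Virtio-BLK";
    case VIRTIO_SCSI_DEVICE_ID: return "Virtio-SCSI";
    default:                    return "other Virtio device";
    }
}

int main(void)
{
    printf("%s\n", virtio_device_type(0x1AF4, 0x1001)); /* Virtio-BLK */
    return 0;
}
```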
Virtio consists of three parts: 1) the front-end driver layer, integrated into the Guest OS; 2) the Virtqueue in the middle, responsible for data transmission and command interaction; and 3) the back-end device layer, which processes the requests sent by the Guest OS.
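For reference, the split Virtqueue that carries this traffic is just three ring structures laid out in guest memory. The sketch below follows the virtq_* layouts in the Virtio specification:

```c
#include <stdint.h>

/* Split-Virtqueue layout, following the Virtio specification. */
struct virtq_desc {          /* descriptor table entry */
    uint64_t addr;           /* guest-physical address of the buffer */
    uint32_t len;            /* buffer length in bytes */
    uint16_t flags;          /* NEXT / WRITE / INDIRECT flags */
    uint16_t next;           /* chained descriptor index, if NEXT is set */
};

struct virtq_avail {         /* driver (front-end) -> device (back-end) */
    uint16_t flags;
    uint16_t idx;            /* next free slot the driver will fill */
    uint16_t ring[];         /* heads of offered descriptor chains */
};

struct virtq_used_elem {
    uint32_t id;             /* head of the completed descriptor chain */
    uint32_t len;            /* bytes the device wrote into the buffer */
};

struct virtq_used {          /* device (back-end) -> driver (front-end) */
    uint16_t flags;
    uint16_t idx;
    struct virtq_used_elem ring[];
};
```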
Normally, in AECP, the entire I/O link consists of seven steps:

1. An application in the Guest OS issues an I/O request, which the Virtio front-end driver turns into Virtqueue entries.
2. The front-end driver notifies QEMU that new requests are available.
3. QEMU's Virtio back-end device fetches the requests from the Virtqueue.
4. QEMU, acting as an iSCSI initiator, sends each request through a local socket, switching from user mode to kernel mode.
5. The kernel passes the iSCSI request to the ABS storage process, switching back to user mode.
6. The storage process parses the iSCSI request and performs the actual read or write.
7. The response returns along the same path, and QEMU raises an interrupt to notify the Guest OS.
However, this process lacks efficiency. Because QEMU connects to the storage process through local sockets, the data flow switches between user mode and kernel mode, incurring data-copy overhead. In addition, the iSCSI protocol layer adds unnecessary processing. If the storage process could receive and process local I/O directly, this performance loss would be avoided, achieving "Virtio Offload to Storage Software."
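The contrast can be made concrete with a sketch. On the socket path the payload itself is copied twice (user to kernel, then kernel to user), while with shared memory only a small descriptor changes hands. Both functions below are illustrative, not AECP code:

```c
#include <stdint.h>
#include <stddef.h>
#include <sys/socket.h>

/* Socket hand-off: the payload is copied from QEMU's buffer into kernel
 * socket memory; the storage process's recv() then copies it out again. */
void handoff_via_socket(int sock, const void *buf, size_t len)
{
    send(sock, buf, len, 0);                /* user -> kernel copy */
}

/* Shared-memory hand-off: only a descriptor travels; the payload stays in
 * guest memory that QEMU and the storage process have both mapped. */
struct desc { uint64_t guest_addr; uint32_t len; };

void handoff_via_shared_memory(struct desc *ring, uint16_t *idx,
                               uint64_t guest_addr, uint32_t len)
{
    ring[*idx].guest_addr = guest_addr;     /* zero-copy: pass a reference */
    ring[*idx].len = len;
    (*idx)++;
}
```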
Vhost is a technology derived from the Virtio standard that optimizes the I/O path and improves the I/O performance of paravirtualized Virtio devices in QEMU.
As mentioned above, the Virtio back-end device handles Guest requests and I/O responses. The solution that moves this I/O processing module outside the QEMU process is called vhost. Since our goal in AECP is to optimize the path between two user-mode processes, we use the vhost-user scheme for storage acceleration; the vhost-kernel scheme, which processes the I/O load in the kernel, is out of scope for this article.
The data plane of vhost-user is divided into a primary and a secondary. Generally, QEMU acts as the primary, which supplies the Virtqueue, and the storage software acts as the secondary, which consumes the I/O requests in the Virtqueue.
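A secondary's data-plane loop can be sketched as follows, restating the virtq_* layouts from above for completeness. All names are illustrative; a real back-end receives the ring addresses and the kick/call eventfds from QEMU over the vhost-user socket, and memory barriers are omitted for brevity:

```c
#include <stdint.h>
#include <unistd.h>

struct virtq_desc { uint64_t addr; uint32_t len; uint16_t flags; uint16_t next; };
struct virtq_avail { uint16_t flags; uint16_t idx; uint16_t ring[]; };
struct virtq_used_elem { uint32_t id; uint32_t len; };
struct virtq_used { uint16_t flags; uint16_t idx; struct virtq_used_elem ring[]; };

struct queue {
    struct virtq_desc  *desc;   /* rings live in shared guest memory */
    struct virtq_avail *avail;
    struct virtq_used  *used;
    uint16_t size;              /* ring size (a power of two) */
    uint16_t last_avail;        /* back-end's private consumption cursor */
    int kick_fd, call_fd;       /* eventfds: guest->back-end, back-end->guest */
};

/* Stub for the storage software's actual I/O handling. */
static uint32_t process_request(struct queue *q, uint16_t head)
{
    (void)q; (void)head;        /* in ABS, the block I/O would happen here */
    return 0;
}

static void consume(struct queue *q)
{
    uint64_t ev;
    read(q->kick_fd, &ev, sizeof(ev));        /* block until the guest kicks */

    while (q->last_avail != q->avail->idx) {  /* drain newly offered requests */
        uint16_t head = q->avail->ring[q->last_avail % q->size];
        uint32_t written = process_request(q, head);

        struct virtq_used_elem *e = &q->used->ring[q->used->idx % q->size];
        e->id  = head;                        /* publish the completion */
        e->len = written;
        q->used->idx++;
        q->last_avail++;
    }

    ev = 1;
    write(q->call_fd, &ev, sizeof(ev));       /* interrupt the guest */
}
```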
Advantages of vhost-user include:

* The Virtqueue sits in memory shared by the Guest OS, QEMU, and the storage process, so I/O data is passed by reference rather than copied.
* The data plane stays entirely in user space, eliminating the user/kernel mode switches of the local socket path.
* The iSCSI protocol layer is removed from the local I/O path, along with its processing overhead.
With vhost-user, the I/O link takes only five steps:

1. An application in the Guest OS issues an I/O request, which the vhost front-end driver turns into Virtqueue entries.
2. The front-end driver notifies the storage process through an eventfd.
3. The storage process fetches the requests directly from the Virtqueue in shared memory.
4. The storage process performs the actual read or write.
5. The storage process publishes the completion to the Virtqueue and notifies the Guest OS via interrupt.
Note: Control information is exchanged between the front-end vhost driver and the back-end vhost device through a UNIX domain socket file.
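These control messages each carry a small fixed header. Below is a sketch of the header and a few request codes, following the vhost-user protocol specification; the eventfds and memory-region file descriptors themselves travel as ancillary data (SCM_RIGHTS) on the same socket:

```c
#include <stdint.h>

/* A few vhost-user request codes, as defined in the protocol spec. */
enum vhost_user_request {
    VHOST_USER_SET_MEM_TABLE  = 5,   /* share guest memory regions (as fds) */
    VHOST_USER_SET_VRING_ADDR = 9,   /* tell the back-end where the rings are */
    VHOST_USER_SET_VRING_KICK = 12,  /* eventfd for guest -> back-end notify */
    VHOST_USER_SET_VRING_CALL = 13,  /* eventfd for back-end -> guest notify */
};

/* Every message starts with this header, followed by `size` payload bytes. */
struct vhost_user_msg_header {
    uint32_t request;  /* one of the request codes above */
    uint32_t flags;    /* protocol version and reply-related bits */
    uint32_t size;     /* length of the payload that follows */
};
```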
We benchmarked AECP with and without Boost mode on a three-node cluster using two replicas.
* Higher IOPS/bandwidth and lower latency represent better performance.
As the test data shows, AECP clusters deliver higher performance and lower latency in all tests with Boost mode enabled, especially in read I/O scenarios.
Leveraging vhost-user, Boost mode optimizes I/O request processing and data transfer, improving AECP VM performance and reducing I/O latency. Users can enable Boost mode to support high-performance applications such as databases.
For more information on AECP, please visit our website.
Arcfra is an IT innovator that simplifies on-premises enterprise cloud infrastructure with its full-stack, software-defined platform. In the cloud and AI era, we help enterprises effortlessly build robust on-premises cloud infrastructure from bare metal, offering computing, storage, networking, security, backup, disaster recovery, Kubernetes service, and more in one stack. Our streamlined design supports both virtual machines and containers, ensuring a future-proof infrastructure.
For more information, please visit www.arcfra.com.