Kioxia demonstrates Offload RAID scheme for NVMe drives

george

At FMS 2024, Kioxia showed off a proof of concept for its proposed new RAID offload methodology for enterprise-class SSDs. The impetus is clear: as SSDs get faster with each generation, RAID arrays have a serious problem maintaining (and scaling) performance. Even when RAID operations are handled by a dedicated RAID card, a single partial-stripe write to, say, a RAID 5 array requires two reads (the old data and the old parity) and two writes (the new data and the updated parity) across different drives. Where there is no hardware acceleration, the data from those reads has to travel all the way back to the CPU and main memory for the parity calculation before the writes can be performed.
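The read-modify-write penalty comes from the parity math itself: XOR-ing the old data back out of the parity cancels its contribution, and XOR-ing the new data in adds the new one. A minimal sketch of that update (illustrative only, not Kioxia's implementation):

```python
def raid5_partial_write(old_data: bytes, new_data: bytes, old_parity: bytes) -> bytes:
    """Compute the updated parity block for a single-block RAID 5 write.

    new_parity = old_parity XOR old_data XOR new_data
    (XOR cancels the old block's contribution to parity, then folds in the new one.)
    """
    assert len(old_data) == len(new_data) == len(old_parity)
    return bytes(p ^ od ^ nd for p, od, nd in zip(old_parity, old_data, new_data))

# Example with 4-byte blocks: the two reads (old_data, old_parity) feed the
# XOR; the two writes are new_data and the returned new_parity.
old_data   = bytes([0x11, 0x22, 0x33, 0x44])
new_data   = bytes([0xAA, 0xBB, 0xCC, 0xDD])
old_parity = bytes([0xFF, 0x00, 0xFF, 0x00])

new_parity = raid5_partial_write(old_data, new_data, old_parity)
```

In a software RAID stack, both reads and the XOR above run on the host; Kioxia's proposal moves exactly this data movement and computation onto the drives.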

Kioxia has proposed using PCIe’s direct memory access feature along with the SSD controller’s Controller Memory Buffer (CMB) to avoid moving data to and from the CPU. The required parity calculations are performed by an accelerator block residing in the SSD controller.

In Kioxia’s PoC implementation, the DMA engine can access the entire host address space (including the peer SSD’s BAR-mapped CMB), allowing it to receive and transmit data as required from neighboring SSDs on the bus. Kioxia noted that their PoC offload resulted in a close to 50% reduction in CPU utilization and a greater than 90% reduction in system DRAM utilization compared to software RAID executed on the CPU. The proposed offload scheme can also support scrubbing operations without consuming host CPU cycles for the parity calculation task.
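Scrubbing amounts to reading every block in a stripe and checking that the data blocks XOR together to the stored parity; under the proposed scheme that XOR would run on the drive-side accelerator instead of the host CPU. An illustrative sketch of the consistency check (my own example, not Kioxia's code):

```python
from functools import reduce

def scrub_stripe(data_blocks: list[bytes], parity: bytes) -> bool:
    """Return True if a RAID 5 stripe's parity is consistent.

    The XOR of all data blocks in the stripe must equal the stored parity.
    """
    computed = reduce(
        lambda a, b: bytes(x ^ y for x, y in zip(a, b)),
        data_blocks,
        bytes(len(parity)),  # start from an all-zero block
    )
    return computed == parity
```

A host-driven scrub pulls every block across the bus to do this check; keeping it on the drives is what lets scrubbing proceed without consuming host CPU cycles.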

Kioxia has already taken steps to share these features with the NVM Express working group. If accepted, the proposed offload scheme would be part of a standard that could become widely available from multiple SSD vendors.
