At FMS 2024, Phison dedicated significant booth space to its enterprise/data center SSD and PCIe retimer solutions, in addition to its consumer products. As a controller/silicon supplier, Phison has historically worked with drive partners to bring its solutions to market. On the enterprise side, its partnership with Seagate on the X1 series (and later Nytro-branded enterprise SSDs) is fairly well known. Seagate provided a list of requirements, had input on the final firmware, and then qualified the drives itself for its data center customers. Such qualification involves a significant investment of resources that only large companies can afford, putting it out of reach of most second-tier consumer SSD vendors.
Phison introduced the Gen 5 X2 platform at last year's FMS as a follow-up to the X1. However, with Seagate focused on its HAMR ramp and fighting other battles, Phison decided to take the X2 through the qualification process on its own. In the bigger picture, Phison also realized that the white-labeling approach to enterprise-class SSDs wasn't going to work in the long run. As a result, the Pascari brand was born, ostensibly to make Phison's enterprise-class SSDs more accessible to end customers.
Under the Pascari brand, Phison has various product lines designed for different applications, from high-performance X-Series enterprise drives to B-Series boot drives. The AI-Series comes in variants that support up to 100 DWPD (more on that in the aiDAPTIVE+ subsection below).
The D200V Gen 5 took the top spot among the drives on display with its leading capacity of 61.44 TB (a 122.88 TB drive is also planned for the same line). The use of QLC NAND in this capacity-focused line lowers sustained sequential write speeds to 2.1 GBps, but these drives are intended for read-intensive workloads.
On the other hand, the X200 is a Gen 5 eTLC drive that boasts sequential write speeds of up to 8.7 GBps. It comes in read-intensive (1 DWPD) and mixed-workload (3 DWPD) variants with capacities of up to 30.72 TB. The X100 eTLC drive is an evolution of the X1/Seagate Nytro 5050 platform, albeit with newer NAND and larger capacities.
These drives come with all the usual enterprise features, including power loss protection and FIPS certification. While Phison didn't specifically advertise it, newer NVMe features like flexible data placement should make their way into the firmware in the future.
100 GBps+ with Two HighPoint Rocket 1608A Cards and Phison E26 SSDs
While this wasn’t strictly a corporate demo, Phison had a station showing 100GBps+ of sequential reads and writes using a normal workstation. The trick was to install two HighPoint Rocket 1608A expansion cards (each with eight M.2 slots) and place 16 M.2 drives in a RAID 0 configuration.
HighPoint Technology and Phison have collaborated to qualify E26-based drives for these types of applications. We’ll have more on that in a later review.
aiDAPTIV+ Pro Suite for AI Training
One of the more interesting demonstrations at the Phison booth was the aiDAPTIV+ Pro Suite. At last year's FMS, Phison showed off a 40 DWPD SSD for use with Chia (thankfully, that fad has passed). The company has since been working on the extreme-endurance angle, bumping the rating up to 60 DWPD (in line with SLC-based cache drives from Micron and Solidigm).
During FMS 2024, the company took that SSD and added a middleware layer on top to keep workloads more sequential in nature, which pushes the endurance rating up to 100 DWPD. That middleware layer is now part of its AI training suite aimed at SMBs that don't have the budget for a full-fledged DGX workstation for local fine-tuning.
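To put a 100 DWPD rating in perspective, the standard conversion to total bytes written is TBW = capacity × DWPD × warranty days. The capacity and warranty period below are hypothetical, purely to illustrate the scale:

```python
# What a DWPD rating translates to in total writes (TBW).
# Capacity and warranty are assumed values, not Phison specs.
capacity_tb = 2          # hypothetical aiDAPTIV+ cache drive capacity
dwpd = 100               # rating after the sequentializing middleware
warranty_years = 5       # typical enterprise warranty period, assumed here

tbw = capacity_tb * dwpd * 365 * warranty_years
print(f"{tbw:,} TB written over warranty (~{tbw / 1000:.0f} PB)")
# -> 365,000 TB (~365 PB) for a 2 TB drive; a 1 DWPD enterprise drive
#    of the same capacity would be rated for roughly 3.65 PB.
```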
Retraining models using these AI SSDs as an extension of GPU VRAM can deliver significant TCO benefits for such companies, as expensive AI training GPUs can be replaced with a set of relatively inexpensive, off-the-shelf RTX GPUs. The middleware licensing is generally tied to the purchase of AI-Series SSDs (which use a Gen 4 x4 interface and currently come in U.2 or M.2 form factors). Using the SSDs as a caching layer enables fine-tuning of very large models with a minimal number of GPUs, without having to buy GPUs primarily for their HBM capacity.
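Phison's middleware is proprietary, so the details of how it stages tensors between VRAM, system DRAM, and the SSDs aren't public. The general idea, however, resembles the NVMe offload exposed by open frameworks such as DeepSpeed's ZeRO-Infinity. The sketch below is of that flavor, not aiDAPTIV+ itself; the model and the mount path for the cache SSD are placeholders:

```python
# Illustrative only: NVMe offload in the style of DeepSpeed ZeRO-Infinity,
# analogous to (but not the same as) Phison's aiDAPTIV+ middleware.
import deepspeed
import torch

model = torch.nn.Sequential(          # stand-in for a large transformer
    torch.nn.Linear(4096, 4096),
    torch.nn.ReLU(),
    torch.nn.Linear(4096, 4096),
)

ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    "zero_optimization": {
        "stage": 3,                       # partition params, grads, optimizer state
        "offload_param": {                # spill parameters to NVMe when not in use
            "device": "nvme",
            "nvme_path": "/mnt/ai_ssd",   # hypothetical mount for the cache SSD
            "pin_memory": True,
        },
        "offload_optimizer": {            # optimizer state is the largest consumer
            "device": "nvme",
            "nvme_path": "/mnt/ai_ssd",
        },
    },
}

# DeepSpeed handles staging tensors between GPU VRAM, host DRAM, and NVMe.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)
```

The trade-off is the same in either approach: fine-tuning throughput drops compared to an all-HBM setup, but the working set no longer has to fit in GPU memory, which is exactly the cost equation the aiDAPTIV+ pitch targets.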