Comment Re:Surprised the market is still as large as it is (Score 4, Interesting)
HDDs are still the most cost-effective solution for large storage arrays that don't need particularly fast random data access, and putting an SSD in front of the drive array as a cache can keep even some of those random-access workloads viable on spinning disk. I think the issue has been more that the array size at which the cost difference gets big enough to offset the "screw it, let's just go all-in on SSD" instinct has been climbing rapidly.
For instance, it used to be that media creatives would have an SSD as their go-to / work drive and a high-TB HDD or RAID to store the bulk media data, but - at least until AI blew the market apart - unless you were seriously budget-limited or producing a vast amount of raw content, a lower-spec, high-capacity multi-TB SSD or two was a potentially affordable option. In high-end server land it was similar: you were spending so much on things like per-core software subscription licenses and however many chassis full of CPUs/RAM that the storage uplift from HDD to SSD on the drive arrays (excluding the stuff that really needs to be SSD, like VM image storage) was largely a rounding error for PO approval until you got up into the 100s of TB or even PB range. But again, then along came AI... The back-of-the-envelope sketch below shows roughly what I mean.
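A minimal sketch of that "rounding error" argument, assuming purely illustrative numbers (roughly $20/TB for nearline HDD, $80/TB for enterprise SSD, and $250k for everything else in the deployment - chassis, CPUs, RAM, licensing); the exact prices don't matter much, only how the uplift scales with array size:

    # Back-of-the-envelope sketch of the HDD-to-SSD uplift vs. total spend.
    # All figures below are illustrative assumptions, not real quotes.
    HDD_PER_TB = 20        # assumed $/TB, nearline HDD
    SSD_PER_TB = 80        # assumed $/TB, enterprise SSD
    OTHER_SPEND = 250_000  # assumed non-storage cost (chassis, CPUs, RAM, licenses)

    for capacity_tb in (50, 200, 1_000, 5_000):
        uplift = capacity_tb * (SSD_PER_TB - HDD_PER_TB)
        total_hdd = OTHER_SPEND + capacity_tb * HDD_PER_TB
        print(f"{capacity_tb:>6} TB: SSD uplift ${uplift:>9,} "
              f"({uplift / total_hdd:6.1%} of an HDD-based PO)")

With those assumed numbers the uplift is about 1% of the PO at 50 TB and under 5% at 200 TB, but it balloons past 20% at 1 PB and dominates the bill at multi-PB scale - which is exactly where the "just go all-in on SSD" logic stops working, and AI pricing only pushes that threshold down further.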
I suspect a lot of people with upcoming hardware refreshes and large SSD drive arrays are going to be taking a good hard look at how much of that data *really* needs to be on SSDs, at least until the AI bubble pops. It might be a bit of a last hurrah for the tech, but the next few years could be very good for distributors and other bulk suppliers of HDDs if those reviews go the way I expect.