Data storage technology is about to accelerate to a whole new pace with the emergence of NVMe protocols and ultra-dense 3D XPoint non-volatile memory (NVM).
Our last post traced how this progression grew out of the gradual evolution of SSDs from bulky devices suitable only for deep pockets into a viable option for all. We also noted how new NVMe protocols allow unprecedented read/write speeds, approaching 4 GB/s over four-lane PCI Express links and even higher numbers over fabrics like Omni-Path. New 3D XPoint technology similarly pushes storage capacity and IOPS within the SSD to previously unreachable heights.
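The "approaching 4 GB/s" figure can be sanity-checked from public PCIe 3.0 spec values; the short sketch below does the arithmetic (the constants are the spec's raw transfer rate and line encoding, not measurements from any particular drive):

```python
# Back-of-the-envelope check of four-lane PCIe 3.0 NVMe bandwidth.
GT_PER_LANE = 8.0      # PCIe 3.0: 8 gigatransfers/s per lane (spec value)
ENCODING = 128 / 130   # 128b/130b line encoding: 128 data bits per 130 sent
LANES = 4              # a typical NVMe SSD uses a x4 link

per_lane_gbps = GT_PER_LANE * ENCODING     # usable gigabits/s per lane
total_gb_per_s = per_lane_gbps * LANES / 8 # gigabytes/s across four lanes
print(f"{total_gb_per_s:.2f} GB/s")        # -> 3.94 GB/s
```

Real-world throughput lands a little below this ceiling once protocol overhead and controller limits are counted, which is why drives are advertised as "approaching" rather than hitting 4 GB/s.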
Combined with other capabilities that we will mention in a moment, these two disruptions will require a complete rethinking of the way data is handled in tiered storage solutions. ISVs attempting to provide products in this new environment must meet the hardware and integration demands that will become de facto requirements for their customer base.
Why 3D XPoint could be even more revolutionary than NVMe
What we did not mention before is how 3D XPoint will enable DRAM-like non-volatile memory functionality, dramatically boosting the performance of data centers and server-intensive applications. “Combined with DRAM, 3D XPoint servers will be able to support 4x the memory capacity at a significantly lower cost per bit than DRAM,” writes ZDNet.
Since this capability is still in late-stage development, such uses remain theoretical for now. The point stands that the ultra-fast IOPS of 3D XPoint, and the correspondingly ultra-low latency, mean that large-scale data enterprises will have to make crucial decisions about which hardware to buy and how to configure it within their overall architecture.
How data storage tiering will fundamentally change
Today, enterprises have far fewer options at their disposal than they will in the coming months. Whether HDDs or SSDs, both components are likely to hang off a SATA connection and transport data along a standard fabric, be it PCIe, Fibre Channel or what have you.
With the emergence of NVMe, SSDs suddenly present an apples-and-oranges comparison to HDDs in practical use. “Fresh” data will live almost entirely on SSD systems, while conventional HDDs are gradually relegated to legacy archiving roles where low cost per terabyte is valued far above metrics like latency or IOPS.
Applications will need to respond in kind, knowing exactly when, why and how to bump data down to a lower tier or promote it to a more readily available one through an automated process. Any kinks in this process could mean more than inefficiency; they could mean many hours of manual coding or data-dumping to rectify the disorganization.
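The automated promote/demote decision described above can be sketched as a simple policy on access recency. This is a minimal illustration with made-up thresholds (the window constants and the `BlockStats` record are assumptions for the example, not any vendor's API); a production policy would also weigh IOPS pressure, queue depth and cost per terabyte:

```python
from dataclasses import dataclass

# Illustrative thresholds only -- real values depend on the workload.
HOT_WINDOW_S = 24 * 3600         # touched within a day  -> keep on NVMe
COLD_WINDOW_S = 30 * 24 * 3600   # idle for a month      -> archive to HDD

@dataclass
class BlockStats:
    last_access: float  # unix timestamp of the last read or write
    tier: str           # current tier: "nvme", "ssd", or "hdd"

def next_tier(stats: BlockStats, now: float) -> str:
    """Pick the tier a block should live on, based on access recency."""
    idle = now - stats.last_access
    if idle < HOT_WINDOW_S:
        return "nvme"   # hot data stays on the lowest-latency media
    if idle < COLD_WINDOW_S:
        return "ssd"    # warm data rides on commodity SSDs
    return "hdd"        # cold data moves to cheap, high-capacity HDDs

# A block untouched for a week is demoted from NVMe to the SSD tier.
now = 1_000_000_000.0
stats = BlockStats(last_access=now - 7 * 24 * 3600, tier="nvme")
print(next_tier(stats, now))  # -> ssd
```

The value of automating this logic is precisely that it removes the manual intervention mentioned above: a background scanner re-evaluates `next_tier` periodically and migrates blocks whose recommended tier has changed.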
All of this complexity will be completely new territory for many ISVs. Security solution providers, for instance, will need to know which processes to delegate to low-latency NVMe systems and when to move “cold” data into mass-storage archival HDDs, thereby freeing space for hot data on a faster tier of storage media. Amid this complexity will be a host of issues that will likely require an integrator partner to cut through the confusion, optimize solutions for various price points and bring products to market faster, without setbacks like unanticipated cooling issues.
UNICOM Engineering will be here during this exciting time to provide knowledge and support resources for solution providers of nearly any scale. We will detail our service offerings along with a deeper look at the issue in an upcoming whitepaper on data storage tiering. In the meantime, you can join our mailing list for updates, and take a look at our solutions design capabilities.
Here’s to an exciting future!