Sorry if this seems latency obvious, but… you can always scale out your storage with end-to-end NVMe

Comment Data storage is one of the most complex areas of IT infrastructure, as it has to satisfy a range of conflicting requirements. Storage architectures must be fast enough to meet the demands of users and applications without breaking the budget, and they must deliver enough capacity for ever-growing volumes of data while remaining reliable.

Not surprisingly, it turns out that most organisations still have a need for on-premises storage infrastructure, despite the lure of cloud-based storage services that promise to take away all the complexity and replace it with per-gigabyte pricing models. This is for various reasons, such as concerns over data governance, security worries or performance issues with online storage.

Whatever the reason, on-premises storage for most organisations is not going to disappear anytime soon. But it is facing new challenges as data volumes continue to expand, new workloads are being introduced into the enterprise portfolio, and users demand ever higher performance.

For small to mid-market businesses, on-premises storage has long been delivered by network-attached storage (NAS) and storage area network (SAN) platforms, both of which were developed to provide a shared pool of storage. However, while a NAS box attaches directly to a corporate Ethernet LAN and serves up files, a SAN may comprise multiple storage devices connected to servers via a dedicated network, often using high-speed Fibre Channel links.

Another key difference between the two is that while NAS exposes a file system to the network via protocols such as NFS or SMB, a SAN provides block-level storage that looks like a locally attached drive to servers on the SAN. As SAN and NAS have matured, some storage arrays have come to offer unified block and file services in a single device. In this latter case, it is important that the storage platform you choose can expand capacity seamlessly across both SAN and NAS environments as your business grows.
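
To make the file-versus-block distinction concrete, here is a minimal Python sketch of how each looks from a Linux host, assuming a NAS share is already mounted at /mnt/nas and a SAN LUN is presented as the block device /dev/sdb (both paths are illustrative placeholders, not anything prescribed by the protocols themselves).

```python
# File-level access (NAS): the array's file system handles layout for us.
with open("/mnt/nas/reports/q3.csv", "rb") as f:
    data = f.read(4096)          # read the first 4 KiB of a named file

# Block-level access (SAN): we see a raw device; a local file system or
# database decides what the bytes mean. Requires root privileges.
with open("/dev/sdb", "rb") as dev:
    block = dev.read(4096)       # read the first 4 KiB of raw blocks
```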

Enterprise workloads are evolving and calling for greater levels of performance. The volumes of data that organisations have to deal with also keep growing, and storage systems now have to serve applications such as analytics to make sense of all that data. Meanwhile, storage systems are also expected to cope with new practices such as DevOps, which brings a rapid cadence to the software development process and calls for the on-demand provisioning of new resources.

This demand for fast and easy access to data sets has driven the uptake of flash storage in the enterprise. At first this took the form of hybrid storage arrays, which use flash as a cache or hot tier in front of a bunch of disk drives to accelerate read and write performance, but all-flash arrays started to gain market share as the relative cost of flash memory chips came down. IDC figures from the third quarter of 2019 show all-flash array (AFA) market revenue was up 11.3 per cent year on year, for example.

Despite this, organisations are seeing that SAN and NAS systems may soon be too slow for some applications. Partly, this is because many flash-based solid state drives (SSDs) were manufactured with legacy host interfaces like SATA and SAS and came in disk drive form factors, in order to maintain compatibility with enterprise server and storage systems based on hard drives.

Because SSDs based on NAND flash are significantly faster than rotating hard drives, interfaces such as SAS and SATA are actually a bottleneck to the potential throughput of the drive. To fix this, SSD makers started to produce drives that used the PCIe bus, which offers higher speed and connects directly to the host processor in a server or storage controller, but there was no standard protocol stack to support this, which hindered the uptake of such solutions.

Fortunately, a group of storage and chip vendors got together and created Non-Volatile Memory Express (NVMe) as a new storage protocol optimised for high performance, designed from the ground up for non-volatile memory media.

NVMe includes a number of features that deliver a significant improvement in I/O performance and reduced latency compared to legacy protocols. NVMe drives typically use four lanes of PCIe 3.0 each, for a total of roughly 4GBps of bandwidth per drive, compared with 12Gbps (about 1.5GBps) for SAS.
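
As a rough worked example (our own back-of-the-envelope arithmetic, derived from the raw line rates and encoding overheads of each interface), the gap falls straight out of the link maths: PCIe 3.0 carries roughly 1GBps of usable bandwidth per lane after 128b/130b encoding, while 12Gbps SAS tops out at about 1.5GBps raw, or around 1.2GBps after 8b/10b encoding.

```python
# Back-of-the-envelope link bandwidth comparison, in GB/s.
pcie3_lane = 8 * (128 / 130) / 8      # PCIe 3.0: 8 GT/s, 128b/130b -> ~0.98 GB/s per lane
nvme_x4 = 4 * pcie3_lane              # typical NVMe drive: four lanes -> ~3.94 GB/s

sas_raw = 12 / 8                      # 12 Gb/s SAS -> 1.5 GB/s raw
sas_usable = 12 * (8 / 10) / 8        # ~1.2 GB/s after 8b/10b encoding

print(f"NVMe x4 PCIe 3.0: ~{nvme_x4:.2f} GB/s")
print(f"SAS 12Gbps:       ~{sas_raw:.2f} GB/s raw, ~{sas_usable:.2f} GB/s usable")
```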

Another feature is support for up to 65,535 I/O queues, taking advantage of the internal parallelism of NAND flash storage. Each queue can also hold up to 65,535 commands, compared with the single queue of the SATA and SAS interfaces, which support 32 and 256 outstanding commands respectively. This means that NVMe storage systems should be far less prone to the performance degradation that SAS and SATA can experience when heavily loaded with requests for data.
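
Using the queue and command counts quoted above, a quick sketch shows just how lopsided the comparison is (the SATA figure assumes the usual AHCI/NCQ limit of one queue of 32 commands):

```python
# Queueing capacity per interface, using the figures quoted above.
interfaces = {
    #               (queues, commands per queue)
    "SATA (AHCI)": (1, 32),
    "SAS":         (1, 256),
    "NVMe":        (65_535, 65_535),
}

for name, (queues, depth) in interfaces.items():
    print(f"{name:11} {queues:>6} queue(s) x {depth:>6} commands "
          f"= {queues * depth:,} outstanding I/Os")
```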

Perhaps equally importantly, NVMe allows applications to request data more or less directly from an SSD, whereas in the traditional storage protocol stack commands have to pass through multiple layers on their way to the target drives. NVMe also provides a streamlined and simple command set that requires far fewer CPU cycles to process I/O requests than SAS or SATA, according to the NVMe standards body. This delivers higher IOPS per CPU instruction cycle and lower I/O latency in the host software stack. For the best application performance, customers will want to architect an end-to-end NVMe infrastructure where the host, network, and storage all incorporate NVMe technology.
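
On the host side, the NVMe devices (and the transport each one uses) can be inspected through the Linux sysfs tree. The short sketch below assumes a reasonably recent kernel that exposes the standard /sys/class/nvme attributes; it is a quick check, not a management tool.

```python
# List NVMe controllers and their transports via sysfs on a Linux host.
# Attribute names (model, transport) assume a reasonably recent kernel.
from pathlib import Path

for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
    model = (ctrl / "model").read_text().strip()
    transport = (ctrl / "transport").read_text().strip()   # "pcie", "rdma", "fc" or "tcp"
    print(f"{ctrl.name}: {model} via {transport}")
```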

Switching to NVMe thus speeds data transfers between the SSD and the host processor in a server, or in the case of a NAS box or SAN storage array, between the SSD and the controller. But this just shifts the bottleneck to the host interface or fabric that links the storage device to the systems it serves.

This is because a fabric such as Fibre Channel is essentially just a transport for SCSI commands, as is iSCSI (which transfers SCSI commands over TCP/IP, typically using Ethernet). However fast your NVMe array is, data is still being sent across the network using a legacy storage protocol stack, with the additional latency that implies.

One answer to this is to extend the NVMe protocol across the network, running it over fabric technologies that are already widely used for storage in enterprises, such as Fibre Channel and Ethernet. This has collectively become known as NVMe over Fabrics (NVMe-oF).

NVMe-oF differs according to the fabric technology employed. Running it over Fibre Channel (FC-NVMe) is relatively simple for environments already invested in that infrastructure. For others such as Ethernet, this may necessitate the use of special network adapter cards supporting Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE).
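
In practice, attaching an NVMe-oF target from a Linux host is done with the nvme-cli tool. The sketch below wraps the discover and connect steps for an RDMA (RoCE) fabric; the address and subsystem NQN are hypothetical placeholders, and the transport argument can be swapped for a different fabric where supported.

```python
# Hedged sketch: discover and connect to an NVMe-oF subsystem over RDMA (RoCE)
# using nvme-cli. The address, port and NQN below are placeholders.
import subprocess

TARGET_ADDR = "192.168.10.50"                    # hypothetical array data port
TARGET_NQN = "nqn.2019-01.com.example:array1"    # hypothetical subsystem NQN

# Ask the target which subsystems it offers.
subprocess.run(["nvme", "discover", "-t", "rdma", "-a", TARGET_ADDR, "-s", "4420"],
               check=True)

# Connect; the remote namespaces then appear as local /dev/nvmeXnY block devices.
subprocess.run(["nvme", "connect", "-t", "rdma", "-a", TARGET_ADDR, "-s", "4420",
                "-n", TARGET_NQN], check=True)
```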

This end-to-end implementation of NVMe allows enterprises to scale out their storage infrastructure without losing any of the low-latency advantages offered by the protocol: the network adapter transfers requested data directly from the storage device into the computer system's memory, with no further intervention needed from the host processor.

Naturally, there is a great deal of detailed technical work in making all of this new technology hang together seamlessly, and organisations will likely find it beneficial to work with a vendor partner that can supply not just the storage, but servers with the necessary adapters and even network switches optimised for NVMe-oF, to provide a true end-to-end NVMe solution. Working with a single partner also brings complete support and components validated to work together.

In fact, the growing complexity of storage means that for many small to mid-market businesses, choosing a data platform that can simplify many processes will be vital. This may include intelligently placing data into storage media tiers or providing quality of service (QoS) capabilities to ensure that business-critical applications get the system resources they need.
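
As a purely illustrative sketch of the sort of decision such a platform makes internally, consider an age-based placement policy. The tier names and thresholds here are hypothetical, and real arrays apply this kind of logic automatically at block or file granularity.

```python
# Hypothetical age-based tiering policy: place data according to how
# recently it was last accessed. Thresholds and tier names are illustrative.
from datetime import datetime, timedelta

def choose_tier(last_access, now=None):
    age = (now or datetime.now()) - last_access
    if age < timedelta(days=7):
        return "nvme-flash"       # hot: keep on the fastest tier
    if age < timedelta(days=90):
        return "capacity-flash"   # warm: cheaper flash or hybrid tier
    return "cloud-archive"        # cold: candidate for a cloud tier

print(choose_tier(datetime.now() - timedelta(days=30)))   # -> capacity-flash
```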

While organisations still need on-premises storage, as noted above, many are also looking to take advantage of cloud-based storage wherever it makes sense to do so. One of the reasons is economics, with many cloud providers offering storage services that cost just pennies per GB per month.

While on-premises storage systems deliver the performance applications need, they can easily become cluttered with data that is no longer in everyday use. A key feature for any SAN or NAS platform is therefore the ability to migrate such data to a lower-cost storage tier, such as a cloud storage service, once speed of access is no longer so important.
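
The sketch below shows the general idea at file level, assuming an S3-compatible cloud bucket and the boto3 library with credentials already configured. The directory, bucket name and 180-day threshold are illustrative, and a real array implements this far more efficiently inside the platform itself.

```python
# Minimal sketch of sweeping cold files out to an S3-compatible cloud tier.
# Paths, bucket name and the 180-day threshold are hypothetical.
import os
import time
import boto3

s3 = boto3.client("s3")
BUCKET = "example-archive-bucket"
COLD_AFTER = 180 * 24 * 3600            # seconds: ~180 days since last access

for root, _dirs, files in os.walk("/mnt/nas/projects"):
    for name in files:
        path = os.path.join(root, name)
        if time.time() - os.path.getatime(path) > COLD_AFTER:
            key = os.path.relpath(path, "/mnt/nas")
            s3.upload_file(path, BUCKET, key)    # copy to the cloud tier
            # a real tiering engine would leave a stub or reclaim the space here
```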

Some storage array platforms go beyond this, allowing customers to deploy a cloud-hosted version of the product onto one or more of the big public clouds such as AWS or Azure. In this scenario, organisations can easily put in place data protection and business continuity strategies by using the cloud-hosted version to replicate snapshots and disaster recovery copies from their on-premises system.

This enables them to build high availability using the consumption-based expense model of the cloud instead of paying for a physical second site. It is also important to avoid cloud vendor lock-in by choosing a solution that offers a choice of multiple cloud providers.

Multi-cloud management is coming to the forefront of the cloud discussion due to escalating cloud storage costs, and the ability to pick and choose between vendors gives customers more business leverage.

As technologies such as NAND flash and storage-class memories mature and become less costly, storage arrays based on them will eventually edge out those based on disk drives. NVMe has been designed to make the most of the capabilities these media offer, and with NVMe-oF now delivering these benefits end-to-end in the data centre, organisations should consider it a key technology for future-proofing their storage infrastructure.
