Nebulon emerges with software-defined storage, but from the cloud

Cloud-defined storage. It's a new departure, in which a substantial chunk of storage controller management operations is offloaded to the cloud. Input/output (I/O), however, stays local, handled by PCIe-based cards that connect to form pools of storage.

The proposed benefits are improved management of storage at scale and, it is claimed, a 50% cut in spending on costly controller-based hardware in the datacentre.

That's what's on offer from Nebulon, a Silicon Valley startup that plans general availability for its products from September.

The company proposes two things, in essence. First, capacity in partner-supplied servers fitted with relatively cheap flash drives. Each server is equipped with a Nebulon PCIe card that offloads storage processing and connects to other Nebulon cards in the datacentre.

Second, and here is where it gets more interesting, functionality around monitoring and provisioning that would normally form part of the controller's job is offloaded to the cloud.

The result is pools of block storage that are built from relatively cheap components and managed via a cloud interface.

Nebulon is something like hyper-converged infrastructure (but with no storage overhead subtracted from local hardware) combined with software-defined storage (which runs in the cloud, albeit with a local hardware element in the PCIe card).

The hardware component is a full-height, full-length PCIe card that fits in the server's GPU slot, because that's where it will get the power and cooling it needs. These cards are called SPUs (storage processing units) in Nebulon-speak. There is no storage on the card, and it appears to the host as a SAS host bus adapter (HBA) storage networking card.

Unlike existing hyper-converged products, Nebulon can present volumes to virtualised or bare-metal environments. As mentioned above, anything that runs locally runs on the SPU, so there's no impact on the server CPU.

Up to 32 SPUs can connect via 10/25Gbps Ethernet to form a pool of storage (an Npod) that can be carved up into provisioned volumes, via the Nebulon ON cloud control plane.

Nebulon ON is where the topology is defined, storage provisioned, telemetry collected and management functions such as updates carried out. Should cloud connectivity be lost, storage continues to work as configured, with the local SPUs acting as a controller cache to which configuration settings will have already been pushed.
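To make that division of labour concrete, here is a minimal, purely hypothetical sketch; the class and method names are invented for illustration and are not Nebulon's API. It models a cloud control plane pushing configuration down to SPUs, which cache it locally so the data path keeps working if connectivity to the cloud is lost.

# Illustrative only: names are hypothetical, not Nebulon's real API.
from dataclasses import dataclass, field

@dataclass
class VolumeConfig:
    name: str
    size_gib: int
    mirrors: int          # level of redundancy

@dataclass
class SPU:
    """Storage processing unit: keeps the last configuration pushed from the cloud."""
    serial: str
    cached_config: list = field(default_factory=list)

    def apply(self, config):
        self.cached_config = list(config)   # settings survive loss of cloud connectivity

    def serve_io(self):
        # I/O is handled locally against the cached configuration
        return f"SPU {self.serial} serving {len(self.cached_config)} volume(s)"

class CloudControlPlane:
    """Stand-in for the cloud side: defines the pod and pushes configuration down."""
    def __init__(self, spus):
        assert len(spus) <= 32, "an Npod groups up to 32 SPUs"
        self.spus = spus

    def provision(self, config):
        for spu in self.spus:               # push settings to every SPU in the pod
            spu.apply(config)

pod = CloudControlPlane([SPU("spu-01"), SPU("spu-02")])
pod.provision([VolumeConfig("db-vol", size_gib=512, mirrors=2)])
print(pod.spus[0].serve_io())               # keeps working even if the cloud is unreachable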

Storage can be provisioned with application-aware templates that come with suggested preset parameters for things such as the number and size of volumes, levels of redundancy and protection, and snapshot scheduling.
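As a purely illustrative example (the field names below are hypothetical, not taken from a real Nebulon template), such a template might bundle presets along these lines:

# Hypothetical example of an application-aware provisioning template
database_template = {
    "application": "postgresql",
    "volumes": [                                         # number and size of volumes
        {"name": "data", "size_gib": 1024},
        {"name": "wal", "size_gib": 128},
    ],
    "redundancy": {"mirrors": 2},                        # protection level
    "snapshots": {"schedule": "hourly", "retain": 24},   # snapshot scheduling
}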

Replication is not set to feature until after general availability.

For now, Nebulon ON runs in the Amazon Web Services (AWS) and Google Cloud Platform (GCP) clouds.

"The cloud control plane in Nebulon brings fleet management and the simplicity that comes with that," said Martin Cooper, solution architecture director at Nebulon. "It brings distributed management at scale, and means you can run applications on the server that you intended to when you bought it."

Nebulon is intended to be run on a per-site basis to start with, although stretched clusters will be offered in forthcoming product revisions.
