Hitachi Content Platform 8 focuses on cutting storage cost – TechTarget

Hitachi Data Systems' latest object storage upgrade includes revamped licensing and denser nodes to try to convince customers that on-premises cloud storage can cost less than putting data into a public cloud.

Hitachi Content Platform (HCP) 8 launched today, supporting 10 TB helium hard disk drives (HDDs), geographically distributed erasure coding, and KVM hypervisors along with new licensing.

HDS claimed the new HCP version lowers storage costs by enabling 67% higher capacity per storage node and supporting 55% more objects than in the past. Customers can store 1.25 billion objects per node if they use 800 GB mirrored solid-state drives (SSDs) to manage the HCP object metadata database, according to Tim Desai, a senior product marketing manager at HDS.

HDS said the new licensing offers customers the option to pay the lower price between data ingested and actual capacity consumed. The price difference depends on the ratio of deduplication and compression of the ingested data set.
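The "pay the lower price" mechanic reduces to a simple comparison. The sketch below is an illustration of that arithmetic, not HDS's actual billing logic; the function name and the 2:1 data-reduction figure are hypothetical.

```python
def billable_tb(ingested_tb, consumed_tb):
    """Bill on whichever is lower: data ingested or capacity
    actually consumed after deduplication and compression."""
    return min(ingested_tb, consumed_tb)

# Hypothetical data set: 100 TB ingested that reduces 2:1 to
# 50 TB on disk -- the customer pays for the 50 TB consumed.
print(billable_tb(100, 50))  # 50

# Data that doesn't reduce well flips the comparison: 40 TB
# ingested but 60 TB consumed bills on the 40 TB ingested.
print(billable_tb(40, 60))   # 40
```

In other words, the better the data set deduplicates and compresses, the further the billed figure drops below the ingested figure.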

"HDS is looking to dispel the myth that public cloud storage is always cheaper than on-premises cloud storage," said Steven Hill, a senior storage analyst at 451 Research. "Unstructured data is becoming a huge problem for business, and object storage offers the metadata capabilities to help with that."

Object-based HCP is Hitachi's main platform for building private clouds.

"We are seeing HCP evolve to this cloud platform, which is why we're so focused on all the enhancements around optimizing costs for cloud infrastructure," said Tanya Loughlin, director of content cloud and mobility product marketing at HDS. "Erasure coding is your best bet for massive petabyte-scale cloud deployments."

A six-site geo-distributed erasure coded configuration, using the new 10 TB Seagate helium-based SAS HDDs with compressed and deduplicated data, could bring a 400% increase in storage per cluster, Desai said.

HCP object storage is sold as an appliance-based package, as a software-only version for VMware ESXi or KVM, or as a managed service through a cloud-based pay-as-you-go model.

The previous licensing model offered three options: "active" for direct-attached array-based capacity; "economy" for the high-volume, low-cost tier with storage server nodes; and "extended" for capacity under management for data tiered off to the cloud or other storage nodes.

HDS is replacing the active array and economy storage server (S node) licenses with basic and premium options, while keeping the extended license for moving content to the cloud.

Under the new pricing model, HDS will sell the new licenses in units of 1 TB of usable storage. A basic license costs 20% less than the premium option; it is designed for a single tenant and management plane, and includes 10,000 namespaces, REST-only protocols, a metadata database, geo-distributed erasure coding, replication, compression and deduplication.

The HCP premium license supports 1,000 tenants and management planes and all basic features plus REST and non-REST protocols, metadata indexing and search capabilities, legal hold/shredding/retention and SAN storage (zero copy failover). An extended license inherits properties from the premium or basic options and protects metadata only.

Geo-distributed erasure coding facilitates data protection across regions with the potential to reduce rebuild times and use less storage capacity than replication. HDS supports a 20+6 erasure code configuration, in which each object is encoded into 20 data fragments and six parity fragments distributed across six storage nodes or sites. Because any 20 fragments are sufficient to reconstruct the object, it can be recovered even if up to six fragments are lost or unavailable due to a large-scale outage or other catastrophic event.
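The capacity savings over replication follow directly from the fragment counts. This is a back-of-the-envelope sketch of the overhead math for a generic k+m erasure code, using the 20+6 scheme described above; it illustrates the general trade-off, not HDS's specific implementation.

```python
def ec_overhead(data_frags, parity_frags):
    """Raw-to-usable storage ratio for a k+m erasure code:
    total fragments stored divided by data fragments."""
    return (data_frags + parity_frags) / data_frags

# The 20+6 code stores 26 fragments for every 20 fragments of
# data, so each usable TB consumes 1.3 TB of raw capacity.
print(ec_overhead(20, 6))  # 1.3

# By contrast, keeping a full replica at each of two sites costs
# 2.0x raw capacity, and a full copy at all six sites would cost
# 6.0x -- which is where the multi-site savings come from.
```

The same formula shows why savings grow with site count: the erasure-coded overhead stays at 1.3x regardless of how many sites the fragments span, while full replication scales linearly with the number of copies.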

"There are many object storage solutions that support geo-distributed erasure coding. It's becoming more and more of a table-stakes feature for customers," Desai said. "Even if they're not going to use it, they've heard about it enough that they want to see it in any solution they're considering."

Hitachi Content Platform previously supported only RAID-based data protection and local erasure coding. Desai said customers used RAID when attaching a SAN to HCP, but they're increasingly using HCP with S nodes, which have built-in erasure coding for data protection. He said customers could move to geo-distributed erasure coding without downtime or a forklift upgrade.

"If you have more than two sites, you're going to begin to see cost savings with geo-distributed erasure coding. When you get up to six sites, it's substantially more," Desai said. He noted that HDS works with large media and entertainment companies that get faster rebuild times with large files, such as movies, using erasure coding.

Support for the open-source KVM hypervisor gives HCP users a less expensive hypervisor option, and a new dedicated management port separates user and administrator network traffic for added security.

HDS also recently updated the Hitachi Data Ingestor (HDI) cloud/file gateway and the HCP Anywhere file sync-and-share product, both part of the HCP product portfolio, along with the Hitachi Content Intelligence search-and-analytics option launched in November. Desai said about half of HCP customers own more than one portfolio product.

HDI improvements include new multipart file transfer capabilities to enable faster uploads of large files to Hitachi Content Platform. HCP Anywhere enhancements include a next-generation Windows client for cloud home directories and virtual desktop infrastructure and user interface improvements for mobile applications and Web portals.

