Category Archives: Cloud Storage

Tintri storage strategy shoots straight at public cloud – TechTarget

Tintri storage is taking a cloud approach with its VM-aware arrays. Co-founder and CTO Kieran Harty said Tintri is adopting an API-based web services approach to help IT organizations deal with "pressure from the CEO to do things differently" by applying the economics and scale of the cloud.

Every cloud storage option has its pros and cons. Depending on your specific needs, the size of your environment, and your budget, it's essential to weigh all cloud and on-prem options. Download this comprehensive guide in which experts analyze and evaluate each cloud storage option available today so you can decide which cloud model (public, private, or hybrid) is right for you.


Harty was Tintri's original CEO until Ken Klein took over in 2013. Prior to launching Tintri, Harty spent seven years as an executive vice president of engineering at VMware, leading the delivery of ESX Server, VirtualCenter and VMware desktop virtualization products.

Tintri VMstore hybrid and all-flash arrays are based on a deep integration with VMware. The Tintri arrays operate at the virtual machine (VM) and disk level, replacing conventional file abstractions with VM-aware storage. Harty gave SearchCloudStorage an update on Tintri's emerging cloud strategy and planned product rollouts in 2017.

You say Tintri's cloud strategy is based on automated web services. What does it entail and how does it change the way people use Tintri storage arrays?

Kieran Harty: A lot of our customers have a huge impetus to do cloud initiatives. The public cloud model has taken off in a big way with our customer base. They want their existing infrastructure and applications to be available with the agility of the public cloud. They don't want to rewrite their applications or redo their compliance for a cloud environment.

Are you trying to avoid getting pigeonholed as a storage-only vendor?

Harty: We're still an all-flash array vendor. Obviously, our VMstore flash storage itself is invaluable. There's a lot of complexity associated with implementing good storage in terms of performance, cost and reliability. But storage is table stakes.

We're taking a web services approach that allows you to automate everything. The basis of web services is that everything is defined using APIs [to provide] the right level of abstraction.
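The shift Harty describes is from LUN-level plumbing to an API where the unit of management is the individual VM. A minimal sketch of that abstraction follows; the class and method names are illustrative only, not Tintri's actual API.

```python
# Toy model of VM-granular storage web services: provisioning, QoS and
# snapshots are all operations on a named VM, never on a shared LUN.
# All identifiers here are hypothetical.

class VMAwareStore:
    """In-memory stand-in for a VM-aware storage service."""
    def __init__(self):
        self.vms = {}

    def provision(self, vm_name):
        self.vms[vm_name] = {"qos": None, "snapshots": []}
        return vm_name

    def set_qos(self, vm_name, min_iops, max_iops):
        # QoS applies to one VM, not to a LUN full of unrelated VMs
        self.vms[vm_name]["qos"] = {"min_iops": min_iops, "max_iops": max_iops}

    def snapshot(self, vm_name, label):
        # Per-VM snapshots mean automation never has to map a VM back
        # to LUN offsets before acting on it
        self.vms[vm_name]["snapshots"].append(label)

store = VMAwareStore()
store.provision("web-01")
store.set_qos("web-01", min_iops=500, max_iops=2000)
store.snapshot("web-01", "pre-upgrade")
print(store.vms["web-01"])
```

Because every step is an API call, the same three operations can be scripted end to end, which is the self-service workflow the interview keeps returning to.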

How would a customer use Tintri storage to build an on-premises cloud?

Harty: Today, more than one-third of Tintri customers are using our platform to build their own cloud with varying degrees of cloud capabilities. They use our web services architecture to automate common tasks like provisioning new virtual machines or applying Quality of Service policies, to scale out while optimizing the location of every individual virtual machine, and to apply predictive analytics to anticipate their future need for capacity and performance.

When we asked our customer advisory board a year ago whether they need connections from their own data center to public cloud (S3, Azure, etc.), they said no. But there has been a transition within the last year where they decided they do need it. So I think it's going to be a gradual process.

More companies want a cloud implementation on premises behind their firewall. Does Tintri plan to emulate the Amazon cloud computing model?


Harty: Yes, we're taking a very similar approach to the enterprise environment. Compute has been a web service since VMware introduced its hypervisor products. You also have Microsoft Hyper-V and containers emerging. Network is becoming a service with things like VMware NSX and Cisco's ACI (Application Centric Infrastructure). But storage has not [developed as a service] at the same pace.

Most storage vendors aren't using a web services approach. They design their architectures using the same physical concepts you have within the traditional data center, where you create LUNs and volumes and have people provision and interpret the stats associated with a particular VM. The model we have is one in which all your workflows are fully automated, including the storage, compute and networking that is provided by other vendors. We're trying to provide you [with] the agility of the public cloud, but within your own data center.

How quickly will you roll out support for public clouds?

Harty: We plan to start integration with AWS later this year, which will give you the ability to send and retrieve granular VM snapshots with Amazon. Gartner is using the term 'cloud-inspired infrastructure' to describe this uptake, and it's very consistent with what we see as well. We're starting with AWS because it's the big gorilla. We'll add Azure support if we see customer demand for it.
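"Granular VM snapshots" shipped to object storage usually means sending only the blocks that changed since the previous snapshot. The sketch below illustrates that idea with a plain dictionary standing in for an S3 bucket; the object-naming scheme and diff logic are our illustration, not Tintri's actual mechanism.

```python
# Conceptual per-VM snapshot shipping: diff two snapshots and upload only
# the new or changed blocks, keyed by a hypothetical object name.

def snapshot_delta(prev: dict, curr: dict) -> dict:
    """Return only the blocks that are new or changed since `prev`."""
    return {addr: data for addr, data in curr.items() if prev.get(addr) != data}

object_store = {}   # stand-in for an S3 bucket

snap1 = {0: b"boot", 1: b"data-v1"}
snap2 = {0: b"boot", 1: b"data-v2", 2: b"new-disk"}

for addr, data in snapshot_delta(snap1, snap2).items():
    object_store[f"vm-web-01/snap-2/block-{addr}"] = data

# Block 0 is unchanged, so only blocks 1 and 2 were shipped
print(sorted(object_store))
```

Retrieval is the reverse walk: fetch the newest copy of each block across the snapshot chain, which is why granular (per-VM, per-block) naming matters.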

What concerns do you have about the recent AWS outage? What redundancies does your cloud strategy include to ensure customers have access to their Tintri storage?

Harty: The AWS outage validates our belief that customers should have a multifaceted cloud strategy. We allow customers to have data in AWS and [also] have data on premises via Tintri enterprise cloud. In the case of an outage, Tintri storage would still be able to offer data on premises via snapshots. Tintri has the ability to replicate data to multiple different sites. If you care about the availability of your data, you can replicate it to multiple sites as well as to AWS, which will give you high availability even during an outage.

What new Tintri storage products are you developing as part of the virtualization strategy?

Harty: The basis for what we do is everything being done in an automated fashion. You can create a snapshot on a VM that talks to us, the storage vendor. You can set [Quality of Service] on the VM, which talks to us, the storage vendor. And you can integrate those workflows in an automated, self-service way. That's very distinct in terms of the capabilities we provide. And it's based on the concept that everything we do is based on a web service.

We've introduced support for VMware vRealize Orchestrator that allows you to orchestrate more general storage workflows. We've had asynchronous replication, but we just introduced synchronous replication capabilities. That allows you to have a disaster recovery capability where all the data is sent to a remote site. You can have two VMstore arrays up to 60 miles apart. Data gets written to both arrays. If there's a failure of the primary array, the secondary array takes over.
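The synchronous replication model Harty describes has one defining property: a write is acknowledged only after both arrays have it, so the secondary can take over with zero data loss. A purely conceptual sketch, not Tintri code:

```python
# Toy illustration of synchronous replication across two arrays.

class Array:
    def __init__(self, name):
        self.name = name
        self.blocks = {}
        self.alive = True

    def write(self, addr, data):
        if not self.alive:
            raise IOError(f"{self.name} is down")
        self.blocks[addr] = data

class SyncPair:
    def __init__(self, primary, secondary):
        self.primary, self.secondary = primary, secondary

    def write(self, addr, data):
        # Synchronous: data lands on BOTH arrays before the ack goes back
        self.primary.write(addr, data)
        self.secondary.write(addr, data)
        return "ack"

    def read(self, addr):
        # If the primary fails, the secondary takes over transparently
        src = self.primary if self.primary.alive else self.secondary
        return src.blocks[addr]

pair = SyncPair(Array("site-A"), Array("site-B"))
pair.write(0, "payroll-db-page")
pair.primary.alive = False          # simulate failure of the primary array
print(pair.read(0))                 # secondary still serves the data
```

The 60-mile limit in the article exists because every write pays the round-trip latency to the second site before it can be acknowledged; asynchronous replication avoids that cost but can lose the most recent writes on failover.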



When Amazon’s cloud storage fails, lots of people get wet – Savannah Morning News

NEW YORK – Usually people don't notice the cloud unless, that is, it turns into a massive storm. Which was the case Tuesday when Amazon's huge cloud-computing service suffered a major outage.

Amazon Web Services, by far the world's largest provider of internet-based computing services, suffered an unspecified breakdown in its eastern U.S. region starting about midday Tuesday. The result: unprecedented and widespread performance problems for thousands of websites and apps.

While few services went down completely, thousands, if not tens of thousands, of companies had trouble with features ranging from file sharing to webfeeds to loading any type of data from Amazon's Simple Storage Service, known as S3. Amazon services began returning around 4 p.m. EST, and an hour later the company noted on its service site that S3 was fully recovered and operating normally.

THE CONCENTRATED CLOUD

The breakdown shows the risks of depending heavily on a few big companies for cloud computing. Amazon's service is significantly larger by revenue than any of its nearest rivals: Microsoft's Azure, Google's Cloud Platform and IBM, according to Forrester Research.

With so few large providers, any outage can have a disproportionate effect. But some analysts argue that the Amazon outage doesn't prove there's a problem with cloud computing; it just highlights how reliable the cloud normally is.

The outage, said Forrester analyst Dave Bartoletti, shouldn't cause companies to assume the cloud is dangerous.

Amazon's problems began when one S3 region based in Virginia began to experience what the company called "increased error rates." In a statement, Amazon said as of 4 p.m. EST it was still experiencing errors that were impacting various AWS services.

"We are working hard at repairing S3, believe we understand root cause, and are working on implementing what we believe will remediate the issue," the company said.

WHY S3 MATTERS

Amazon S3 stores files and data for companies on remote servers. Amazon started offering it in 2006, and it's used for everything from building websites and apps to storing images, customer data and commercial transactions.

"Anything you can think about storing in the most cost-effective way possible," is how Rich Mogull, CEO of data security firm Securosis, puts it.

Since Amazon hasn't said exactly what is happening yet, it's hard to know just how serious the outage is. "We do know it's bad," Mogull said. "We just don't know how bad."

At S3 customers, the problem affected both front-end operations (meaning the websites and apps that users see) and back-end data processing that takes place out of sight. Some smaller online services, such as Trello, Scribd and IFTTT, appeared to be down for a while, although all have since recovered.

The corporate message service Slack, by contrast, stayed up, although it reported degraded service for some features. Users reported that file sharing in particular appeared to freeze up.

The Associated Press' own photos, webfeeds and other online services were also affected.

TECHNICAL KNOCKOUTAGE

Major cloud-computing outages don't occur very often (perhaps every year or two) but they do happen. In 2015, Amazon's DynamoDB service, a cloud-based database, had problems that affected companies like Netflix and Medium. But usually providers have workarounds that can get things working again quickly.

"What's really surprising to me is that there's no fallback; usually there is some sort of backup plan to move data over, and it will be made available within a few minutes," said Patrick Moorhead, an analyst at Moor Insights & Strategy.

AFTEREFFECTS

Forrester's Bartoletti said the problems on Tuesday could lead to some Amazon customers storing their data on Amazon's servers in more than one location, or even shifting to other providers.

"A lot more large companies could look at their application architecture and ask how could we have insulated ourselves a little bit more," he said. But he added, "I don't think it fundamentally changes how incredibly reliable the S3 service has been."


Box revenues near $400m as cloud storage demands grow – www.computing.co.uk

Box has announced revenues of $398.6m for its most recent fiscal year, an increase of 32 per cent on its previous financial year, as demand for its cloud storage and content services continues to grow.

Despite the increase in revenues the firm still posted a loss of $150m, although this was an improvement on the $210m it lost in fiscal 2016. Box also posted its first quarter of positive free cash flow in Q4, at $10m, a notable milestone for any youngish company.

Investors will have noted this with mild optimism, as well as the fact that billings rose by 23 per cent to $454m in fiscal 2017, as it demonstrates that more firms are willing to pay for access to its cloud storage platform and the tools it offers to work with, and share, this data.

Indeed Box said it now has a paying customer base of over 71,000 firms, including high-profile brands such as Volkswagen Group of America, Discovery Communications, and Spotify.

Aaron Levie, co-founder and CEO of Box, was upbeat on the results and said they demonstrated the firm's focus was paying off and, if that continues, will see it reach its milestone $1bn revenue target.

"Box is raising the bar in cloud content management. We've consistently delivered innovative new products, set the standard for security and compliance, and helped customers in every industry move to the cloud with confidence," he said.

"We are driving towards a $1 billion long-term revenue target, and this year we plan to invest for scale while continuing to drive operating leverage."

Levie also took to Twitter to tout the fact the company became free-cash-flow positive in its fourth quarter and that revenue for its next fiscal year is guided to pass $500m.


Update: AWS cloud storage back online after outage cripples popular sites – GeekWire


An outage of Amazon Web Services' Simple Storage Service (S3) impacted sites across the internet Tuesday. Continue reading for our story and updates as events unfolded.

UPDATE 2:15 p.m.: Amazon Web Services has fixed all the issues related to the cloud storage outage Tuesday. Here is the latest message from the service health dashboard: "As of 1:49 p.m. Pacific, we are fully recovered for operations for adding new objects in S3, which was our last operation showing a high error rate. The Amazon S3 service is operating normally."

UPDATE 1:20 p.m.: The online world is starting to spin again as Amazon is fixing the problems that caused high error rates at its data centers in Virginia, knocking out many prominent websites and apps Tuesday morning. Here is the latest update: "S3 object retrieval, listing and deletion are fully recovered now. We are still working to recover normal operations for adding new objects to S3."

UPDATE 1:00 p.m.: It appears Amazon is close to fixing the problem, as it posted the following message on the service health dashboard with the expectation that error rates will fall within the next hour: "We are seeing recovery for S3 object retrievals, listing and deletions. We continue to work on recovery for adding new objects to S3 and expect to start seeing improved error rates within the hour."

UPDATE: The outage remains ongoing, but Amazon Web Services has fixed the issue with the service health dashboard, which was previously not showing the Simple Storage Service outage. The company also believes it has identified the cause of the outage.

Here is the latest update from AWS:

Update at 11:35 AM PST: We have now repaired the ability to update the service health dashboard. The service updates are below. We continue to experience high error rates with S3 in US-EAST-1, which is impacting various AWS services. We are working hard at repairing S3, believe we understand root cause, and are working on implementing what we believe will remediate the issue.

Here is a look at the now updated service health dashboard, which displays all the incidents.

Original story below.

Amazon Web Services' Simple Storage Service, its cloud storage program, started experiencing high error rates at data centers in Northern Virginia just before 10 a.m. Pacific Tuesday morning, knocking down service to many of the countless websites and applications that use AWS, such as Expedia, Slack, Medium and the U.S. Securities and Exchange Commission.

The outage appears to be affecting the AWS service health dashboard, as it shows no outages currently. Amazon adjusted by posting a special message at the top of the site that reads as follows: "We're continuing to work to remediate the availability issues for Amazon S3 in US-EAST-1. AWS services and customer applications depending on S3 will continue to experience high error rates as we are actively working to remediate the errors in Amazon S3."

In an ironic twist, some websites dedicated to telling people when sites are down are down.

Amazon is the king of cloud computing, capturing more than 40 percent of the market, according to a recent report. AWS topped $12 billion in sales for the year, up 55 percent from the same period last year, blowing past a goal of reaching $10 billion in sales in 2016.

That said, the internet is not happy with the outage and people are venting their frustrations on Twitter. Others seem to be embracing the outage, terming it a "digital snow day."


Nimble: Just as well our cloud storage runs in our own cloud, eh, eh? – The Register

Explainer: Nimble's Cloud Volumes (NCV) store block data for use by Amazon or Azure compute instances, but the NCVs themselves are not stored in either Amazon's Elastic Block Store or in the Azure cloud.

With remarkable timing, Nimble made these claims just hours before the S3 outage (which had knock-on effects for EBS and other services). The storage contender claimed the two cloud giants' infrastructure does not have the availability or reliability needed. Nimble staffer Dimitris Krekoukias quoted Amazon EBS documentation as an example to justify this stance:

He claimed: "Every single customer I've spoken to that has been looking at AWS had never read that link I posted in the beginning, and even if they had, they glossed over the reliability part."

Krekoukias, blogging as RecoveryMonkey, claimed the following about ABS and Azure block storage:

Nimble says data centre transactional applications, which use block storage, are inhibited from moving to a cloud whose reliability it can't guarantee, so Nimble has built its own cloud to deliver the Nimble Cloud Volumes service.

Customers can choose a capacity, performance and backup SLA, and attach to AWS and/or Azure. And "the users never see or touch a Nimble system in any way. All they see is [our] easy portal."

Nimble claims six-nines storage uptime, and data integrity millions of times better than what native cloud block storage provides. Really?

In a separate blog, Krekoukias writes: "Nimble creates a checksum and a self-ID for each piece of data. The checksum protects against data corruption. The self-ID protects against lost/misplaced writes and misdirected reads."

He says that, as well as block-level checksums, Nimble storage does multi-level checksums.

Add this to the triple+ parity RAID scheme Nimble uses, and Krekoukias thinks he is justified in saying NCVs are more than a million times more durable than EBS or Azure block storage.
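The checksum-plus-self-ID scheme Krekoukias describes can be sketched in a few lines. The point is that a checksum alone cannot catch a misdirected write: a block can be internally consistent yet sitting at the wrong address. This is an illustration of the general technique, not Nimble's actual on-disk format.

```python
# Each stored block carries a checksum (catches corruption) and a self-ID
# recording where the block *believes* it lives (catches misdirected
# reads and lost/misplaced writes).

import zlib

def store_block(addr, payload: bytes):
    return {"addr": addr, "crc": zlib.crc32(payload), "data": payload}

def read_block(disk, addr):
    blk = disk[addr]
    if zlib.crc32(blk["data"]) != blk["crc"]:
        raise IOError("corruption: checksum mismatch")
    if blk["addr"] != addr:
        # the data is intact, but it is the wrong block entirely
        raise IOError("misdirected read: self-ID mismatch")
    return blk["data"]

disk = {7: store_block(7, b"customer-row"), 8: store_block(8, b"index-page")}
print(read_block(disk, 7))

# A plain checksum would NOT catch this: block 8's copy is internally
# consistent, it is just sitting at address 7.
disk[7] = store_block(8, b"index-page")
try:
    read_block(disk, 7)
except IOError as e:
    print(e)
```

Layering this check at multiple levels (block, RAID stripe, filesystem) is what "multi-level checksums" usually means in practice.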

He says that, with database IOs prioritised over low-latency sequential IOs, Nimble offers an IOPS SLA with its Cloud Volumes.

Krekoukias said Nimble is "still working on pricing but the monthly commitment is $1,500. The minimum customer commitment is 1 month. The minimum volume commitment is 1 day."

"So, a customer that's already paying $1,500 could create a huge volume temporarily to test something and then delete it in a few hours. We will only charge them for that day of use."

The Register has contacted Amazon for comment about the claims.

Nimble has built a public block storage cloud service, which means quite some investment in facilities and software. Obviously it thought this was the best, if not the only, way to get on-premises transactional block data availability and durability levels up to mission-critical type levels. That way it can continue to sell its storage facilities, on a cloud usage basis and integrate with its on-premises gear in a hybrid cloud model.

Of course, its users are locked into NCVs but, with NCV availability and durability being, as far as we know, unique in the public cloud arena, that will be a trade-off its customers are willing to make.

There are alternative public clouds for backup data, like those from Backblaze and Carbonite, but block storage is quite another matter.

This Nimble public cloud block storage is certainly an individual marketing tactic and we wonder if other on-premises storage array suppliers will do the same thing. We'd point out that, as far as we know, no other stand-alone storage supplier is doing this. There are IBM and Oracle with their public clouds but these are system-level offerings. Dell (EMC), HDS, HPE and NetApp are not doing what Nimble has settled on.

It is, literally, a nimble offering. We're surprised, and say it's great to see a small player shake up the cloud block storage market. Let's hope it builds up a sufficient customer base to withstand whatever pricing hammer blows Amazon might send its way in the future.


New solution guarantees 100 percent uptime for private cloud storage – BetaNews

Whether public or private, one of the key factors businesses consider in choosing a cloud service is to ensure maximum availability.

Cloud storage specialist Scality is announcing its new HALO Cloud Monitor, a 24/7 solution to provide customers continuous uptime for their managed private cloud storage environments.

Designed to work with the Scality RING object storage platform, HALO continuously monitors customer environments in real time and provides predictive analytics to ensure storage systems are performing optimally. Comprehensive dashboards offer diagnostic metrics, monitoring system-level statistics, component processes, memory, disk and many other key elements. It provides user-friendly visualization of events, proactive fault and incident detection, configuration assistance and system health checks.

The HALO system uses smart learning, employing previous system behavior to define predictive ranges for key performance indicators (KPIs). These KPIs can then detect changes in the storage environment before they become problematic. Automatic alerts are triggered to notify key personnel, who can then proactively respond and maintain continuous uptime.
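The "predictive range" idea can be sketched very simply: learn a baseline for a KPI from past samples, then alert when a new sample falls outside it. The 3-sigma band below is our assumption for illustration; Scality has not published HALO's actual model.

```python
# Minimal predictive-range KPI check: learn a normal band from history,
# flag samples outside it before the component actually fails.

from statistics import mean, stdev

def learn_range(history, k=3.0):
    m, s = mean(history), stdev(history)
    return (m - k * s, m + k * s)

def check(sample, lo, hi):
    return "ok" if lo <= sample <= hi else "ALERT"

disk_latency_ms = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2, 5.1, 5.0]  # learned baseline
lo, hi = learn_range(disk_latency_ms)
print(check(5.2, lo, hi))   # within the predicted range
print(check(9.5, lo, hi))   # drifting disk: alert before it becomes an outage
```

The value of learning the range per system, rather than using a fixed threshold, is that a latency that is normal for one deployment can be an early warning sign in another.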

"Our Scality RING system is designed to be 100 percent available and now with Scality HALO we have the cloud monitoring assurance for customers to guarantee 100 percent uptime for their Scality RING and S3 environments," says Daniel Binsfeld, VP of DevOps and Global Support at Scality. "Our customers, both service providers and enterprise companies, must deliver on strong service level agreements to their users. With Scality HALO we provide peace of mind and confidence that downtime can become a thing of the past for everyone."

HALO comes in two program levels: a fully featured DCS edition and a Standard version. The Standard version is available to all Scality customers for free and includes up to 15 diagnostic metrics. Scality HALO premium, with a 100 percent availability guarantee, is available to Scality Dedicated Care Service customers through the company's alliance partners and global network of ISVs and resellers.

For more information visit the Scality website.



Nimble offers Nimble Cloud Volumes all-flash cloud storage – ComputerWeekly.com

Nimble Storage has announced the availability of Nimble Cloud Volumes, a Nimble-run cloud storage service that allows customers to provision cloud-based capacity to compute instances in the Amazon or Microsoft clouds or for their on-premise compute.



Using the service, Nimble customers will be able to provision, via a Nimble web portal, block storage volumes that physically reside on Nimble-owned all-flash infrastructure, initially on the east and west coasts of the US, but later to be extended elsewhere.

Cloud storage volumes, available in gold, silver and bronze service levels, are created by customers and then presented as storage for Amazon Web Services (AWS) and/or Microsoft Azure cloud compute instances.

Features available to Nimble cloud customers will include backup and recovery, cloning and the ability to use the service for burst workloads.

Nimble pre-sales technical consultant Rich Fenton said customers can expect sub-millisecond latency with six-nines availability.

"We typically find customers reluctant to use the cloud because the resiliency is not enterprise grade," he said. "It's for customers that want persistency in cloud storage that they wouldn't have been able to find in the past, and for those that are concerned about cloud supplier data lock-in."

Fenton said likely use cases for Nimble Cloud Volumes would be web hosting, email, collaboration, customer relationship management (CRM), enterprise resource planning (ERP) and HR apps.

A key advantage touted by Nimble is that data can easily be ported between cloud services. It is often a concern of potential cloud customers that data that resides with one cloud provider might not be easily moved to another. Nimble guarantees that will not be the case with its Cloud Volumes.

Customers will potentially be able to use Nimble Cloud Volumes as tiers of storage in conjunction with on-premise capacity but there will be no automated tiering capability between them.

Nimble Cloud Volumes will initially cost $0.10 per gigabyte a month.


Apple, Microsoft and Amazon offer fairer deal on cloud storage – Networks Asia

Apple, Microsoft and Amazon have agreed to give cloud storage subscribers fairer contracts after intervention by the U.K.'s Competition and Markets Authority.

Such cloud storage services are typically used to store photos, videos, music or digital copies of important documents.

If the services shut down or vary their capacity or prices without notice, customers can lose their data, or be held hostage.

The CMA asked the storage service providers to give adequate notice before closing, suspending or changing services, and to allow customers to cancel their contracts and receive a pro-rata refund if they didn't accept service changes.
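The pro-rata refund the CMA asked for is simple arithmetic: return the unused fraction of the prepaid term. The figures below are illustrative only, not taken from any provider's actual pricing.

```python
# Pro-rata refund: refund the unused fraction of a prepaid subscription.

def pro_rata_refund(annual_fee, days_remaining, days_in_term=365):
    return round(annual_fee * days_remaining / days_in_term, 2)

# A customer who paid 79.00/year and rejects a price rise
# with 200 days left on the term:
print(pro_rata_refund(79.00, 200))
```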

The regulator last year obtained similar undertakings from Google, Dropbox and five other cloud storage providers.

The CMA estimates that three in 10 British adults store personal data in the cloud, the majority of them using free services.

The cloud storage providers made the changes to their terms and conditions voluntarily, thus avoiding enforcement action by the CMA. The regulator said it was ending an investigation into cloud storage begun in December 2015.

Amazon's European subsidiary, Amazon Media EU, agreed among other things to ensure that price increases do not take effect during a consumer's fixed contract term, and to clearly and narrowly define the circumstances in which Amazon may suspend or terminate the contract or service.

Apple subsidiary Apple Distribution International said it would give consumers 30 days to remedy "non-material" breaches of contract before their service was cut off, and would allow them to cancel within 14 days after renewal of a fixed-term contract.

As for Microsoft, it said it would provide advance warning if it intended to shut a OneDrive user's account for going over their storage allowance, and generously promised not to shut OneDrive accounts for inactivity as long as they were fully paid up.

That last will come as a relief to anyone using their OneDrive account to back up infrequently changed data.


Peter Sayer covers European public policy, artificial intelligence, the blockchain, and other technology breaking news for the IDG News Service.

DLT Solutions Partners with Pure Storage to Deliver All-Flash Based … – Business Wire (press release)

HERNDON, Va.--(BUSINESS WIRE)--DLT Solutions (DLT), an award-winning public sector technology leader, and Pure Storage (NYSE: PSTG), the market's leading independent solid-state array vendor, today announced a partnership. By joining the Pure Storage Partner Program (P3), DLT Solutions and Pure Storage can accelerate and de-risk critical agency initiatives across the public sector. Pure Storage's all-flash technology helps address government modernization efforts to achieve results that drive operations forward, like going mobile, transforming to cloud IT, or unlocking insights with analytics.

"The simplicity, automation and resiliency provided by Pure Storage are all essential to our public sector customers as they look to cut costs, and reduce power and space requirements despite growing data sets," said Jim Propps, VP, Enterprise Platforms and Enterprise Data Management. "Pure Storage will be a strong partner to enhance our data management offering and help keep DLT on the cutting edge of industry technology as they modernize the storage market."

Pure Storage's FlashArray//M delivers the simplicity, ease of implementation, power efficiency, security and performance needed to run data-intensive applications within federal agencies. The Pure Storage FlashArray, now available on DLT's GSA Schedule, also meets the availability, reliability and scalability requirements of government agencies, while users experience as much as 10x the performance of disk drives with easy set-up and operation, data-at-rest encryption and Rapid Data Lock technology for all-inclusive security.

"IT departments have to get the most out of every dollar spent on technology and ensure they are improving business outcomes," said Michael Sotnick, VP of Global Channels and Alliances, Pure Storage. "This is especially true for government agencies who are held accountable by their citizens. We are thrilled to work with DLT to ensure that these agencies have an innovative, secure and efficient storage solution.

Pure's FlashArray//M has earned the National Information Assurance Partnership (NIAP) Common Criteria Certification (Network Device Protection Profile, v1.1). This certification validates that products in the Pure Storage FlashArray portfolio meet the stringent testing and technical requirements for security mandated by the NSA, in alignment with DLT's best-in-class solutions enabling the public sector to meet compliance and streamline the procurement process.

About DLT Solutions

DLT is a leading technology partner to the federal, state and local government, education, utilities and healthcare markets. For more than 25 years, the company's dedication to helping the public sector make smart technology choices and simplify their technology procurements ensures its customers have the best options for Cybersecurity, Cloud, Application Lifecycle, Digital Design, IT Consolidation and IT Management solutions. The DLT advantage includes strategic partnerships with industry-leading and emerging technology companies - including Amazon Web Services, Autodesk, ForeScout, Google, Informatica, Intel Security, Oracle, Quest Software, Red Hat, SolarWinds, Symantec and Veritas - whose products and services can be easily procured through DLT by leveraging its broad portfolio of government IT contracts including GSA, SEWP V, U.S. Communities and Texas DIR. To learn more, visit DLT's Resource Center, call 800.262.4358 or email sales@dlt.com. Also on LinkedIn and Twitter (@DLTSolutions).
