Category Archives: Cloud Storage
MapR embraces cloud storage for file management & containers – Computer Business Review
MapR-XD is designed to eliminate data silos and support new data processing workloads from the edge and data centre to the cloud.
MapR is moving to embrace cloud technology more fully with the release of a cloud-scale data store for managing files and containers.
MapR-XD fits into the company's Converged Data Platform and is said to support any data type from the edge to the data centre and multiple cloud environments, with automatic policy-driven tiering across hot, warm, and cold data.
The company said that the new product gives customers the ability to create vast, global data fabrics which are ready for analytical and operational applications, essentially making it easier to use data.
"As applications become more intelligent and take advantage of more diverse data in real time, for both analytical and operational uses, there arises the need for new approaches to data processing," said Matt Aslett, research director, data platforms and analytics, 451 Research. "MapR-XD is designed to eliminate data silos and support new use cases as they emerge that require data processing from the edge, data centre and to the cloud."
MapR-XD includes file and container support, which the company says eliminates data silos and simplifies management across files and containers. It is also said to scale to trillions of files and exabytes of data.
MapR-XD also aims to make the most of the power of network interconnects and is said to take advantage of the underlying heterogeneous hardware. On this front it offers automated capabilities such as logical partitioning, parallel processing for disparate workloads, and bottleneck avoidance with I/O shaping and optimisations.
The product also includes an optimised container client that supports both legacy and new containerised event-based microservices applications, along with containers, databases, and event streams, and works with multiple schedulers such as Kubernetes, Mesos, and Docker Swarm.
On the cloud front the product supports edge, on-premises, and cloud environments and allows for multi-temperature capabilities across flash, disk, and cloud tiers.
Use Azure Managed Disks to simplify cloud storage management – TechTarget
Earlier this year, Microsoft released Azure Managed Disks, a new feature that simplifies storage management for cloud admins -- and offers other benefits around availability, image capture and more.
The challenge with unmanaged disks is that they get complicated quickly. Each cloud storage account has limits in terms of overall capacity and how many IOPS are supported. This model means that admins likely need to create multiple storage accounts when they deploy multiple VMs. On top of that, administrators have to ensure that they don't exceed capacity limits and that each storage account can provide the IOPS required by all the disks within the account.
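To make that arithmetic concrete, here is a minimal sketch in Python, assuming the 2017-era limits of roughly 20,000 IOPS per standard storage account and 500 IOPS per standard unmanaged disk (both limits are assumptions for illustration, not figures from the article):

```python
# Rough estimate of how many storage accounts a fleet of unmanaged disks needs.
# Assumed limits (check current Azure docs): ~20,000 IOPS per standard storage
# account and ~500 IOPS per standard unmanaged disk.
ACCOUNT_IOPS_LIMIT = 20_000
DISK_IOPS = 500

def accounts_needed(disk_count: int) -> int:
    disks_per_account = ACCOUNT_IOPS_LIMIT // DISK_IOPS  # 40 disks per account
    return -(-disk_count // disks_per_account)           # ceiling division

print(accounts_needed(100))  # a 100-disk deployment needs 3 storage accounts
```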
Azure Managed Disks simplify this process. They provide an abstraction layer that prevents the need for admins to create and manage storage accounts -- and their limits -- for virtual hard disks (VHD).
In addition to not having to worry about exceeding storage account limits, here are three more benefits of Azure Managed Disks:
To prevent single points of hardware failure, admins place VMs into availability sets. For example, they can put a load-balanced fleet of web servers into an availability set with multiple fault domains to ensure each VM runs on independent hardware. However, unmanaged disks don't provide that same guarantee. It's possible that all the disks for each VM in an availability set could end up in storage accounts placed on the same storage unit. Therefore, there's a potential single point of failure for storage when you use unmanaged disks, even when you place the VMs in an availability set.
Azure Managed Disks help eliminate that potential single point of failure for VM storage; they ensure that VMs in an availability set will use virtual disks placed on separate storage units.
VM images are a great way to speed up deployments. Once an image is built, admins can deploy servers repeatedly, using the same configuration and settings.
To capture unmanaged disk images, admins must use a command-line interface (CLI). With Azure Managed Disks, admins use a simple user interface in the Azure portal to capture an image. A managed image also includes managed data disks connected to a VM, so admins can capture both the managed OS disks and data disks as part of the process. Once admins capture a managed image, they can deploy new VMs based on that image without having to create new storage accounts or copy VHD files.
In addition to images, administrators can take independent snapshots of Azure Managed Disks. This allows them to make a point-in-time copy of an individual disk and to perform point-in-time recovery for data. With independent snapshots, admins can delete the parent disk, but the snapshot can persist for however long it's needed.
Admins can also use these snapshots to rebuild VMs from scratch. For example, if they require a point-in-time recovery for a VM, admins can create a new VM using the new managed disk based on a point-in-time snapshot. Admins can even place independent snapshots in a globally redundant storage account for disaster recovery.
Managed Disks offer performance tiers for Standard (hard disk drive) and Premium (solid-state drive) storage. To get started, select the Azure Managed Disks option when you create a VM in the portal, as shown in Figure 1.
In addition to provisioning VMs with Azure Managed Disks in the portal, admins can use PowerShell, the Azure CLI and Azure Resource Manager templates for automated deployments.
If you already use unmanaged disks and your VMs are in a region that supports Azure Managed Disks, you can migrate to the new feature. The process entails the use of a PowerShell command to perform the conversion after the VM is deallocated. Microsoft provides a collection of scripts on how to perform the migration. Keep in mind that the Azure Managed Disks feature is still new and is currently not available in the Azure Government regions.
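The article doesn't print the commands, but the flow it describes (deallocate, then convert) can be sketched with the Azure SDK for Python. The client and method names below come from the azure-mgmt-compute package and vary across SDK versions, so treat this as an assumption-laden outline rather than Microsoft's documented procedure:

```python
# Sketch: convert a VM's unmanaged disks to managed disks.
# Assumes azure-mgmt-compute and azure-identity are installed, and that method
# names match your SDK version (newer releases prefix long operations with begin_).
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "my-rg"                # hypothetical names
VM_NAME = "my-vm"

client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# The VM must be deallocated before the conversion can run.
client.virtual_machines.begin_deallocate(RESOURCE_GROUP, VM_NAME).result()

# The conversion itself is one-way: there is no supported path back.
client.virtual_machines.begin_convert_to_managed_disks(RESOURCE_GROUP, VM_NAME).result()
```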
Cloud on your horizon? What managers should ask about data storage – Rochester Business Journal
Data storage in the cloud has become a mainstay for companies in all industries of all sizes. Yet as more and more valuable data moves to the cloud, how can businesses stay one step ahead of the security risks?
The 2017 State of the Cloud Report, published by the Santa Barbara, Calif.-based cloud-computing provider RightScale Inc., shows 95 percent of businesses make use of the cloud in some form.
There are reasons why the cloud is an appealing place to store data. For one, a third-party provider has a broader perspective of cyberthreats that can help protect your business, said Dan Shugrue, director of product marketing at Akamai Technologies Inc., a Cambridge, Mass.-based cloud-service provider.
"Let's say you have an e-commerce site and prefer to touch your data yourself. You can be excellent at your job and notice when the latest threat comes in and take measures to block it," Shugrue said. "But you can't see who is threatening other companies in your industry, and you can't put protections in place before they strike."
Then there is the cost incentive. Consumer-grade cloud storage, such as Google Drive or a basic Dropbox account, comes free of charge.
"I see a lot of small companies using cloud storage because it keeps your overhead down real low," said Mark McLarnon, chief technology officer of CyberPoint International, a cybersecurity firm based in Baltimore. "You're pushed, you're almost enticed to use things like (Microsoft) OneDrive. Even with the free versions, you get a lot of storage. But what's overlooked is the risk of unauthorized access."
Aaron Newman, co-founder and CEO of CloudCheckr Inc., a software firm based in Rochester, does not see the exodus of businesses moving to the cloud as a passing fad.
"It's an undeniable movement that people are going to move to the cloud," he said. "In 10-20 years from now, 90 percent of the data, 90 percent of all computing resources is going to be in the cloud. Anyone that's putting up a new service or application today, it's all done in the cloud."
What are you storing?
The consumer-grade technology tends to be free, but users are paying for it with a lack of privacy, said Donald Spicer, associate vice chancellor and chief information officer of the University System of Maryland.
"If you have a consumer-grade Google account, there is no limitation about what Google is able to look at within your private account. None of that is true when an enterprise pays for a service. It's all protected, and it's quite a different service," he said.
For that reason, upgrading to a paid cloud provider is worth the investment for many businesses because you are more likely to keep your data safe from competitors or other intruders, Spicer said.
While there is still on-premise data storage on its campuses, the University System of Maryland does rely heavily on the cloud. For example, it has moved to software-as-a-service solutions for its human resources and financial functions, Spicer said.
Data stored in the cloud is secured by a variety of measures. For example, data has to be encrypted both in transit and at rest to prevent unauthorized access. The university system also has a cybersecurity council with representation from all its institutions.
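As an illustration of the at-rest half of that equation, here is a minimal sketch using the Python cryptography library's Fernet recipe; it is a generic example of client-side encryption, not the university system's actual tooling:

```python
# Minimal illustration of encrypting data at rest before it leaves your control.
# Requires: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # keep this key out of the storage provider's reach
cipher = Fernet(key)

record = b"student-id=1234; transcript=..."
ciphertext = cipher.encrypt(record)          # what the cloud provider stores
assert cipher.decrypt(ciphertext) == record  # only the key holder can recover it
```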
CloudCheckr's Newman believes businesses need to think of the cloud as secure. The problem is many businesses do not.
"You're much better off putting your data in the cloud," he said. "If you're putting your data on a (cloud) system like Salesforce.com or some other system, those vendors are very diligent about fixing security holes. You're never going to have perfect security; you still have threats. They're just way more diligent about it."
"I think moving to the cloud actually makes your stuff more secure," he added.
Ultimately, especially for institutions with personal or confidential data to protect, it is important to stay on top of which setting is safest. That is the approach followed by Jefferson Health in Philadelphia.
"Being in health care, we have to really look at the sensitivity and criticality of some of the data," said Robert Dalrymple, director of information systems and technology security and enterprise information security officer for Jefferson Health and Thomas Jefferson University. "Some data and information, we may not want to have in the cloud, although we can have private and dedicated cloud situations. It's probably going to be a hybrid final state here."
Where is the data?
For managers contemplating the cloud for the first time or wanting to upgrade from a free account, what are some important questions to ask prospective providers?
First of all, know where the data will be.
"We have an obligation, if we have protected information, to know where it is stored," CyberPoint's McLarnon said. "The internet connects us in ways we don't always grasp. We could be storing a file with a cloud-storage provider and it might be replicated in Oregon, Ireland or Eastern Europe."
He also recommended making sure your provider requires multifactor authentication to access your data. That means anyone looking for access has to provide multiple pieces of evidence to confirm their identity, such as typing in a password and identifying a security image.
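One common second factor is a time-based one-time password (TOTP). The sketch below uses the pyotp library purely to illustrate the mechanism; it is not any particular provider's implementation:

```python
# Minimal TOTP illustration: the server and the user's authenticator app share
# a secret, and a code is only valid for a short window.
# Requires: pip install pyotp
import pyotp

secret = pyotp.random_base32()   # provisioned once, e.g. via a QR code
totp = pyotp.TOTP(secret)

code = totp.now()                # what the authenticator app displays
print(totp.verify(code))         # True: the second factor checks out
print(totp.verify("000000"))     # almost certainly False
```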
Another important question concerns micro-perimeters around the data, said Brian Vecci, technical evangelist at software company Varonis Systems in New York City. Micro-perimeters are the equivalent of placing a wall directly around the data itself. "(They) restrict access on a need-to-know basis and monitor and alert on all usage for unusual behavior. The idea is that a cybercriminal can and will scale the walls of a castle, so let's put the crown jewels in a locked safe within a locked room that is monitored 24/7."
Then there is air gapping. This is the practice of making sure that there is a physical barrier between unsecured public WiFi networks and the place where your data is being stored.
Some cities are not adapting to the cloud as fast as others. Rochester in particular is lagging behind in implementing the cloud, Newman said.
"Rochester for 100 years was one of the innovation capitals of the world in everything from digital, optical, medical, research and even beer (industries)," he said. "Definitely in the cloud world we're lagging behind. If you go to one of the big cities they're much more proactive about moving to the cloud."
With top technical universities such as Rochester Institute of Technology and the University of Rochester, the city should stay relevant in terms of cloud computing, Newman said.
"There's definitely people doing it; we're just not as advanced as some of the other areas. But we should be," he said. "We have just incredible universities here so we really don't have an excuse not to be."
Today, cloud usage is already near-universal in the business world, and that trend shows no sign of reversing. "More data will move to and pass through the cloud as costs go down and functionality goes up," Vecci said. "The companies that will be ahead of everyone else will be the ones that leverage unified data-security platforms for connecting cloud and on-premise systems into a single framework."
Best security apps for email – The Daily Dot
Safely using the internet is becoming increasingly important and challenging. We now rely on the web for sensitive communication, the transfer of health information, trade secrets, and so much more.
Not surprisingly, the most popular service providers are not necessarily the most reliable in terms of security and privacy. Big corporations usually have a vested interest in collecting and scrutinizing user data to enhance their ads and services. That does not sit well with all users.
So for those of you who are not very fond of big tech companies peeping into your data, we've compiled a list of applications and services that provide better privacy in each category.
As it has become the de facto method of communication in many domains, email is one of the most sensitive services everyone uses. When you're using a service such as Gmail, your data is safe in transit, but not in storage.
This means that the service provider has access to the content of your emails, which can become an issue if its servers are hacked, or if the service decides to give away your data to government agencies.
A more secure alternative to mainstream email providers is ProtonMail, an email service that uses end-to-end encryption to secure your messages. This is a form of encryption that ensures only the sender and the receiver of the email can read it. So while ProtonMail's servers store your emails, the service doesn't have the keys to decrypt them.
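To see why the provider can't read the mail, consider this miniature public-key sketch in Python; it is a generic illustration of end-to-end encryption, not ProtonMail's actual OpenPGP implementation:

```python
# End-to-end encryption in miniature: the server relays ciphertext but never
# holds the recipient's private key. Requires: pip install cryptography
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Generated on the recipient's device; the private key never leaves it.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = public_key.encrypt(b"meet at noon", oaep)  # what the server stores
plaintext = private_key.decrypt(ciphertext, oaep)       # only the recipient can do this
assert plaintext == b"meet at noon"
```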
What are the tradeoffs? The features that come with a ProtonMail free account are considerably limited, and you don't get all the AI-powered features that the likes of Gmail give you. But that might be a small price to pay if you value privacy.
There are dozens of messaging apps, but not all of them are equally secure. We've previously discussed the principles of secure messaging here on the Daily Dot.
Signal, the open-source messaging app favored by Ed Snowden, features default end-to-end encryption, deletable messages, and minimal storage of metadata. Signal's protocol has become the gold standard for encryption and has been adopted by other apps.
WhatsApp is one of the applications that implement Signal's protocol, which makes it comparably secure. For the sake of user convenience, however, WhatsApp has forgone enabling some of these security mechanisms by default. And since WhatsApp is owned by Facebook, a commercial company, it may make its own uses of whatever data it has on you.
When you access the internet through your browser, your internet service provider (ISP) will be able to see which sites you visit, as will any other party who might be monitoring your traffic. If the websites are not encrypted, ISPs will also have access to the pages you visit and the data you send to them.
The Tor browser is an open-source application that leverages the Tor network, an open network of volunteer nodes that helps protect against traffic analysis and surveillance. When you use Tor, all of your browser traffic is encrypted and relayed across several Tor nodes before eventually reaching its destination. An eavesdropper will know that you're using Tor but won't be able to see your traffic.
There are some tradeoffs to using Tor, however. Internet speed slows down as your traffic has to make several hops, and some websites will block access to Tor nodes.
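Applications other than the bundled browser can also route traffic through a local Tor daemon's SOCKS port. Here is a minimal Python sketch, assuming Tor is listening on the default 127.0.0.1:9050 and that requests[socks] is installed:

```python
# Route an HTTP request through a locally running Tor daemon.
# Requires: pip install "requests[socks]"  and a Tor client on port 9050.
import requests

proxies = {
    "http": "socks5h://127.0.0.1:9050",   # socks5h: resolve DNS through Tor too
    "https": "socks5h://127.0.0.1:9050",
}

# This Tor Project service reports the IP it sees, which should belong
# to a Tor exit node rather than to you.
resp = requests.get("https://check.torproject.org/api/ip",
                    proxies=proxies, timeout=60)
print(resp.json())
```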
As with email, storing your files in the cloud will make them accessible to the service provider and anyone else who gains access to their servers or your account.
A secure alternative to Google Drive and Dropbox is Tresorit, a storage service that uses end-to-end encryption to secure users' files. Like most storage services, Tresorit enables you to store your files in the cloud, sync them with local folders, and share them with friends and coworkers.
Tresorit doesn't come for free, however. It costs a minimum of $10 monthly for 1 TB of end-to-end encrypted storage, which is a small price to pay for peace of mind.
Ben Dickson is a software engineer and the founder of TechTalks. Follow his tweets at @bendee983 and his updates on Facebook.
Hitachi Content Platform 8 focuses on cutting storage cost – TechTarget
Hitachi Data Systems' latest object storage upgrade includes revamped licensing and denser nodes to try to convince customers that on-premises cloud storage can cost less than putting data into a public cloud.
Hitachi Content Platform (HCP) 8 launched today, supporting 10 TB helium hard disk drives (HDDs), geographically distributed erasure coding, and KVM hypervisors along with new licensing.
HDS claimed the new HCP version lowers storage costs by enabling 67% higher capacity per storage node and supporting 55% more objects than in the past. Customers can store 1.25 billion objects per node if they use 800 GB mirrored solid-state drives (SSDs) to manage the HCP object metadata database, according to Tim Desai, a senior product marketing manager at HDS.
HDS said the new licensing offers customers the option to pay the lower of two prices: one based on data ingested and one based on actual capacity consumed. The difference between the two depends on how well the ingested data set deduplicates and compresses.
"HDS is looking to dispel the myth that public cloud storage is always cheaper than on-premises cloud storage," said Steven Hill, a senior storage analyst at 451 Research. "Unstructured data is becoming a huge problem for business, and object storage offers the metadata capabilities to help with that."
Object-based HCP is Hitachi's main platform for building private clouds.
"We are seeing HCP evolve to this cloud platform, which is why we're so focused on all the enhancements around optimizing costs for cloud infrastructure," said Tanya Loughlin, director of content cloud and mobility product marketing at HDS. "Erasure coding is your best bet for massive petabyte-scale cloud deployments."
A six-site geo-distributed erasure coded configuration, using the new 10 TB Seagate helium-based SAS HDDs with compressed and deduplicated data, could bring a 400% increase in storage per cluster, Desai said.
HCP object storage is sold through an appliance-based package, a software-only version for VMware's ESXI or KVM, or as a managed service through a cloud-based pay-as-you-go model.
The former licensing model offered three options: "active" for direct-attached array-based capacity; "economy" for the high-volume, low-cost tier with storage server nodes; and "extended" for capacity under management for data tiered off to the cloud or other storage nodes.
HDS is replacing the active array and economy storage server (S node) licenses with basic and premium options, while keeping the extended license for moving content to the cloud.
Under the new pricing model, HDS will sell new licenses in units of 1 TB of usable storage. A basic license costs 20% less than the premium option and is designed for a single tenant and management plane and includes 10,000 namespaces, REST-only protocols, a metadata database, geo-distributed erasure coding, replication, compression and deduplication.
The HCP premium license supports 1,000 tenants and management planes and all basic features plus REST and non-REST protocols, metadata indexing and search capabilities, legal hold/shredding/retention and SAN storage (zero copy failover). An extended license inherits properties from the premium or basic options and protects metadata only.
Geo-distributed erasure coding facilitates data protection across regions with the potential to reduce rebuild times and use less storage capacity than replication. HDS supports a 20+6 erasure code configuration, where each object is encoded into 20 data fragments and six parity fragments distributed across six storage nodes or sites. The object can still be recovered if up to six fragments are lost or unavailable due to a large-scale outage or other catastrophic event.
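The storage arithmetic behind that 20+6 scheme is easy to check. The sketch below derives the overhead from the article's figures; the 3x replication baseline is a common point of comparison, not an HDS-quoted number:

```python
# Overhead and fault tolerance of a 20+6 geo-distributed erasure code.
data_fragments, parity_fragments = 20, 6
total = data_fragments + parity_fragments

overhead = total / data_fragments   # 1.3x raw capacity per byte stored
tolerated_losses = parity_fragments # any 6 of the 26 fragments may be lost

print(f"overhead: {overhead:.2f}x vs 3.00x for triple replication")
print(f"survives loss of up to {tolerated_losses} fragments")
```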
"There are many object storage solutions that support geo-distributed erasure coding. It's becoming more and more of a table-stakes feature for customers," Desai said. "Even if they're not going to use it, they've heard about enough that they want to see in any solution that they're considering."
Hitachi Content Platform previously supported only RAID-based data protection and local erasure coding. Desai said customers used RAID when attaching a SAN to HCP, but they're increasingly using HCP with S nodes, which have built-in erasure coding for data protection. He said customers could move to geo-distributed erasure coding without downtime or a forklift upgrade.
"If you have more than two sites, you're going to begin to see cost savings with geo-distributed erasure coding. When you get up to six sites, it's substantially more," Desai said. He noted that HDS works large media and entertainment companies that get faster rebuild times with large files such as movies using erasure coding.
Support for the open-source KVM hypervisor gives HCP users a less expensive alternative, and a dedicated management port separates user and administrator network traffic for added security.
HDS also recently updated the Hitachi Data Ingestor (HDI) cloud/file gateway and the HCP Anywhere file sync-and-share product, which are also part of the HCP product portfolio, along with the Hitachi Content Intelligence search-and-analytics option launched in November. Desai said about half of HCP customers own more than one portfolio product.
HDI improvements include new multipart file transfer capabilities to enable faster uploads of large files to Hitachi Content Platform. HCP Anywhere enhancements include a next-generation Windows client for cloud home directories and virtual desktop infrastructure and user interface improvements for mobile applications and Web portals.
Overcoming Healthcare Hybrid Cloud Storage Challenges – HITInfrastructure.com
June 06, 2017 - Hybrid cloud storage environments are a reality for healthcare organizations seeking to digitize their health IT infrastructure. However, many entities are challenged when deciding which workloads and applications are best suited for the cloud, and which are better off remaining on-premises.
ClearSky Data CEO Ellen Rubin told HITInfrastructure.com that hybrid cloud storage is here to stay for the foreseeable future. While some organizations are eager to move parts of their IT storage infrastructure to the cloud, many are still hung up on which data sets belong on which type of storage solution.
"Healthcare organizations are living in this hybrid reality for the foreseeable future, so they need to decide how they want to handle the portfolio of infrastructure," Rubin stated. "That's what entities are trying to figure out right now."
"Some workloads will never go to the cloud," she continued. "Some things can go; [organizations] may keep primary production data in their data center, but they'll use the cloud for archival or backup. Organizations will be comfortable about certain workloads being built in the cloud and living there permanently."
Rubin typically sees all of these instances present in every healthcare cloud storage client. Organizations are faced with overcoming general distrust of cloud storage, while making informed decisions on which data lives where.
"The healthcare industry is unique because it has such strict compliance and security concerns," explained Rubin. "Healthcare companies are all trying to adopt the cloud."
"They have a huge infrastructure that has always been in their data centers and under their control. They have to manage and run the data center while supporting very heavy workloads for EHRs and analytics and everything else involved with running the business."
Data protection is the number one concern healthcare organizations face when determining the best way to store their data. Rubin finds that health IT infrastructure presents a very large, expensive, and complex set of data challenges at the storage level.
"Healthcare organizations want to move to a cloud environment that saves money and comes with exciting and compelling features," said Rubin. "They are also trying to adopt digital transformation strategies, which means they need to develop and release software more quickly."
"Entities are looking to the cloud model because it gives them faster delivery and more agility," she continued. "Organizations are sometimes stuck in a box as they try to innovate and add to their storage infrastructure."
IT departments must appease patients and clinicians with faster and more innovative IT infrastructure. However, these departments are also hesitant to let go of their current infrastructure because data stored on-premises is under their control.
Rubin said she encounters this dilemma with every healthcare organization considering a cloud solution.
"Healthcare is divided on their opinion of the cloud for data storage," said Rubin. "Some are eager and excited to move to the cloud while others are struggling with how to make cloud storage work in a way that doesn't put the data at a higher risk."
In addition to cloud security concerns, organizations are also faced with justifying rebuilding workloads and applications that are running smoothly in their traditional data center environment. Rebuilding workloads and applications for the cloud is a strenuous undertaking that makes a migration strategy particularly important.
"What you see happening is security teams weighing in on what workloads can safely move to the cloud," Rubin explained. "It's a negotiation, trying to figure out which workloads an organization would consider moving to a cloud environment. Are there any new projects the IT department would be willing to start with in the cloud? Are there any workloads we can migrate off that feel less risky?"
"It's hard," she continued. "A lot of the time, cloud service providers are being asked to help the customer make their transition to the cloud and work with them while they figure out their approach."
Organizations need to decide where to store their data based on the nature of the application and the nature of the data, Rubin maintained.
"Data that is patient identifiable or PHI is sensitive," explained Rubin. "Even organizations that are heavily using the cloud may hold off on patient data and choose to move internal IT types of workloads where data is sensitive but not extremely compliance oriented."
Entities typically start by moving low-end or smaller workloads and applications to build up confidence before migrating more sensitive or robust workloads. Those workloads that contain too much data may not be worth the trouble to migrate.
Entities often start with newer datasets that grow once they are deployed in the cloud.
Generally, data migration is determined by where the application wants to run. The standard rule is that larger applications that need to scale up on the compute side are a better fit for the cloud.
"Things the customer feels don't need to live in their data center anymore, like VMware environments, are moved to the cloud because they take up a tremendous amount of resources," said Rubin. "The VMware environment is already virtualized and organizations are willing to consider that it doesn't have to sit on physical gear."
"It's easier to move a VMware environment to the cloud than workloads that haven't been virtualized," she added. "There are characteristics of the application that make it more cloud friendly or less challenging to move."
Once it's decided what workloads and applications are moving to the cloud, the actual migration needs to take place. Rubin finds that healthcare organizations tend to underestimate how difficult it is to migrate to the cloud.
"Initially, organizations underestimated how hard it would be to deploy real IT infrastructure in the cloud," said Rubin. "They didn't think about data migration, latency, or rebuilding applications for the cloud. These are the things that organizations have been learning the hard way over the past several years."
The cloud is much different than what IT administrators are used to deploying in their legacy infrastructure environments.
Organizations should start with a strong understanding of the skills and processes involved before and after the migration occurs, including management and maintenance requirements.
"Organizations are now much smarter because they have tried different cloud tactics and solutions to varying degrees of success," Rubin explained. "They realize that managing the cloud is managing infrastructure, too, even though the data is not in an organization's data center and they don't have to scale or handle data on-premise."
The network requirements of data migration also present an obstacle for many healthcare organizations.
"The ability to just move data, in terms of the bandwidth that would be available for a large, heavy movement to the cloud or from the cloud, is where things get ugly," Rubin said.
"An organization has to be willing to invest to upgrade and scale their network. In some ways, it's a short-term problem. If you have a dataset and you've moved it, then you're done, because everything that happens with the data from that point is in the cloud."
Rubin advised that entities think about the migration processes holistically by involving the security team and the finance team from the beginning of the cloud deployment and migration process.
"The cloud providing capacity on demand is a change for the finance department. The security team also needs to be involved in the process to ensure that organizations don't sign deals with vendors that don't adhere to their security protocols," Rubin explained.
Trials and proofs of concept, to get comfortable with the technology and see how it runs, are also necessary for a successful cloud deployment. Organizations need to be sure that users are able to easily retrieve data from the cloud and that workflow is the same or better than before the cloud deployment.
"In healthcare organizations especially, making a change brings a lot of questions," Rubin concluded. "Showing that the cloud solution works before it's fully deployed gives users confidence in the technology and reassures them that their workflow will not be disrupted by the change."
iCloud vs. Google Drive: Apple ignites a price war for cloud storage – SlashGear
Apple went into overdrive with the announcements it made yesterday during its WWDC 2017 keynote, but one small yet important change to iCloud wasn't given much attention. It seems that Apple has reconsidered its iCloud pricing, ultimately giving heavy iCloud users more storage for their money. This could very well ignite a cloud storage pricing war between iCloud and competitors like Google Drive at a pretty important moment.
Let's start off with the changes Apple is making to iCloud. For quite some time now, Apple has offered a few different storage tiers. You have the option of getting 5GB for free, 50GB for $0.99 per month, 200GB for $2.99 per month, 1TB for $9.99 per month, and finally, 2TB for $19.99 per month. The changes Apple has implemented involve the 1TB and 2TB tiers.
In fact, Apple is getting rid of the $19.99 tier entirely. As of yesterday, everyone on the $9.99 1TB plan has been automatically upgraded to 2TB of storage. As a result, Apple has updated its monthly pricing table for regions around the world, which you can check through the source link below.
This is an interesting move, but it certainly seems premeditated. Yesterday, Apple announced a new Files feature for iPads coming in iOS 11. Files works a lot like a file manager on a traditional desktop, not only showing you the files and folders you have stored locally on your device, but also letting you directly pull files from a variety of cloud-based storage services.
Notably, one of the services compatible with Files is Google Drive. With this change in pricing, Apple is pretty severely undercutting Google's pricing, which still comes in at 1TB for $9.99 per month. Though Apple will allow you to easily access Google Drive, this pricing shift further cements the fact that Apple wants as many iOS users on iCloud as possible.
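The per-gigabyte gap is straightforward to compute from the prices stated above:

```python
# Price-per-gigabyte comparison at the $9.99 tier, using the article's figures.
icloud_gb, icloud_price = 2048, 9.99    # 2TB after Apple's upgrade
gdrive_gb, gdrive_price = 1024, 9.99    # Google Drive's 1TB tier

print(f"iCloud:       ${icloud_price / icloud_gb:.4f}/GB/month")  # ~$0.0049
print(f"Google Drive: ${gdrive_price / gdrive_gb:.4f}/GB/month")  # ~$0.0098
```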
This could very well spark a pricing war between Apple and Google. With Drive more easily accessible to iPad users in iOS 11, Google may be tempted to bump its 1TB tier up to 2TB, which in the end is a great thing for everyone, not just iOS users. We'll have to watch how Google responds to this pricing shift in the coming weeks, but don't be surprised to see Google make a few tweaks of its own.
SOURCE: Apple
MapR diversifies to cloud storage market – ZDNet
In the Hadoop world, we've noted that MapR has always marched to a different drummer. Its core file system is the building block for its Converged Platform, where you can squash the Lambda architecture, running batch, interactive, and real-time on the same cluster. Putting its IoT money where its mouth is, MapR also recently added edge processing to extend the footprint of its converged platform.
But MapR doesn't want you to think of it as solely a Hadoop platform. Big on Data bro Andrew Brust previously reported on MapR getting container religion, and then hawking its file system to SAP for providing low-cost cloud storage. Cue the drum roll: that sets the stage for today's announcement that MapR is rebranding its file system to MapR-XD.
MapR-XD extends the core NFS/Posix-compliant interface of MapR-FS by adding a new object interface to Amazon S3 cloud storage. The target use case is classic data tiering, but extended to the cloud. In this case, colder data is moved to S3 but is managed as a single logical view from MapR-XD. Specifically, the file system metadata remains the same, as do the security policies.
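MapR doesn't publish its tiering engine's internals here, but the policy it describes (age-based demotion of cold files to S3, with metadata staying put) can be sketched generically with boto3. The bucket name and 90-day threshold are hypothetical, and real MapR-XD tiering happens transparently inside the file system:

```python
# Generic sketch of policy-driven tiering: demote files untouched for 90+ days
# to S3. Illustrates the idea only; MapR-XD does this inside the file system.
# Requires: pip install boto3 (with AWS credentials configured).
import os
import time

import boto3

BUCKET = "cold-tier-example"    # hypothetical bucket
AGE_LIMIT = 90 * 24 * 3600      # hypothetical 90-day coldness policy

s3 = boto3.client("s3")

def demote_cold_files(root: str) -> None:
    now = time.time()
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if now - os.path.getatime(path) > AGE_LIMIT:  # cold by policy
                s3.upload_file(path, BUCKET, path.lstrip("/"))
                # a real tiering engine would now replace the local
                # copy with a stub and keep the metadata unchanged
```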
The new S3 support in MapR-XD is a response to S3 being interwoven more tightly into online systems. Traditionally, S3 was the place where cloud data was stored, but when that data was consumed by managed Hadoop services such as Amazon EMR, it was marshaled into their own file systems. Amazon itself has been exposing S3 data through ad hoc SQL query services like Athena, and more recently to Redshift through Spectrum.
The new XD release extends tiering from cool to hot data with new support for Flash storage. We expect that this will be part of a trend that will eventually add support of NVRAM, an emerging form of storage promising nearly the speed of memory with the density (and scale) of Flash.
Covering data platforms, one has to have enough knowledge about storage to be dangerous. As MapR added the option of selling its storage system a la carte, our initial thought was: why enter a mature space already dominated by household names? While MapR's file system shares the same Posix compliance as EMC, NetApp and other enterprise storage stalwarts, it is positioning its offering as a lower-priced alternative that takes advantage of modern, commodity scale-out hardware. Well, that makes it look a bit like EMC Isilon.
Then again, MapR is positioning this for cloud storage - it's not trying to replace on-premise enterprise storage. And for truly scalable cloud storage, object stores such as S3, Azure Blob Storage, and Google Cloud Storage are the default. Incumbents like Dell EMC are also trying to home in on the action.
So MapR is trying to thread a needle. The not-yet-public company has had success with its current model, recording 81% YoY revenue growth for the quarter that concluded at the end of April. But that growth has happened in a much smaller fishbowl. When it comes to cloud storage, it's going up against Goliaths whose economies of scale it will never match. MapR is aiming to deliver a scalable storage system that is cost-effective enough to justify its slight price premium compared to cloud object storage. It is doing so as the Amazons and Googles of the world are brute-forcing direct access to their cloud stores to make them perform "good enough." But for MapR, tilting at windmills remains part of its core DNA as much as ever.
What is NAS? A Practical Guide – Cloudwards
Network-attached storage, or NAS, is a great way to store large amounts of data while also making it accessible from anywhere. Here at Cloudwards.net we recommend that a NAS be part of any thorough backup strategy, and in this article we'll go through what NAS is and, more specifically, what a NAS is not.
We'll be taking a quick look at what a NAS can and cannot do, as well as how you can use it in your personal setup to enlarge your storage space, while also keeping data secure against any small problems that may arise. Do not think, however, that network-attached storage is a backup solution by itself: if you're looking to keep your data safe, check out our best online backup services or our best cloud backup for NAS article.
In essence, a NAS is a mini-server that sits on your desk. You can connect it directly to your computer through a USB cable, but that would negate its main benefit: the network. A NAS creates a small network all its own that any device with the right credentials (username and password) can access. A NAS is a step up from using a simple external HDD, and a step toward creating your own personal cloud storage.
If you want to know more about the differences between external HDDs and NAS, as well as how they interact with cloud backup solutions, we have a comparison article that explainsexactly that.
If you have a NAS set up, you can store data on it and then access it from any other device you own, just like with a cloud storage provider. For those of you wondering why not just get a service that costs $5 per month rather than buy a $300 box, the main benefits are speed and the lack of a third party.
A NAS is faster than a remote cloud server: while your upload and download speeds to a remote server depend on a host of factors, from your internet connection to server firmware, the connection speed to your NAS is limited only by your network speed and the NAS hardware. If, for whatever reason, your network isn't fast enough, you can connect your computer to the NAS directly over USB and transfer data at your HDD's read/write speed.
The other main benefit to a NAS is that you're hosting the files yourself, without a third party having anything to do with it. Much like with any of our best zero-knowledge providers, only you hold the keys to your data, and you never need to worry about government warrants or corporate intrusion. Privacy is assured when using a NAS, as long as you keep up your security protocols (more on that later).
As you can imagine, NAS are perfect for anyone handling large amounts of data: audio and video hobbyists, sole entrepreneurs, even small to medium-sized businesses can benefit from having a NAS in the office. Everyone that works with you can access the NAS and leave files or pick them up and, with the right software, even edit and manipulate files simultaneously.
The collaboration aspect of having a NAS within your organization makes it very attractive indeed, especially if the NAS is hooked up to a LAN. Without the lag associated with cloud collaboration software, you can get work done faster than ever in your team, and investing in a robust NAS should mean that you'll see productivity skyrocket.
That's not to say having a NAS means you can cancel your cloud storage subscriptions: most NAS hit their limit at the 10TB mark, and though you could set up some kind of networked RAID array for more space, it may be better to store your archives with any of our best cloud storage providers instead. NAS are great for storing current projects that you need to access at the drop of a hat; older ones are probably better off stored in a cheaper location.
Do note that the storage cap is one of the limitations of a NAS: if you need a huge amount of space, you may want to consider setting up a file server instead (read our article on the best affordable servers for small businesses if you want to know more). Though file servers and NAS work in a very similar fashion, a server is far more powerful and will give you even greater control over what goes in and out. The cost, however, rises as well: not only will you need more specialized hardware, you'll need to pay someone to maintain it.
If connectivity isn't your main concern, setting up a RAID array might also be an option if you need to store huge amounts of data. It will likely be a lot cheaper than setting up a file server or NAS and will give you great speed, as it's basically just a huge hard drive. To find out more, check out our article on RAID.
Like anything connected via a network, NAS are vulnerable to outside attack by cybercriminals or spying by intelligence agencies like the NSA. Generally, the security measures that come with a NAS are perfectly adequate, but buyers should be aware that this is not always the case. A perfect example is the Europol agent who took confidential files home and stored them on his NAS, which did not automatically come with password protection.
Bureaucratic idiocy aside, this story shows that it pays to make sure you've followed all the instructions when setting up your NAS. Cloud security is no joke, and it pays to check and double-check that you've done everything by the book. For more information, make sure to check out our NAS security guide.
Though there are plenty of manufacturers and vendors out there that offer network-attached storage, the general consensus seems to be that QNAP Systems and Synology offer the best bang for your buck. Though neither is particularly cheap (prices start at $200 and go up steeply from there), both offer plenty of bells and whistles as well as improved security measures.
The cheapest NAS usually come with around 1TB of storage and are perfect for people that don't need much space, but do want access to a small, personal remote server. A few steps up from that come larger bays that can fit up to 10TB and are great for medium-sized organizations. Any bigger than that and you're straying into more technical territory, where HDDs can be swapped out at will into ever larger arrays.
Here at Cloudwards.net, we're particularly big fans of Synology (which is why we have an article dedicated to the best cloud backup for Synology) and we have put together a video with five essential tips you'll need to know when using your Synology DiskStation.
In both cases, however, the main attraction is the software that comes with the NAS. Synology and QNAP offer high-end, user-friendly software that will make setting up the NAS easy and using it even more so. On top of that, more experienced users will be able to configure their NAS using this proprietary software to make their little box into their own personal cloud storage.
Do note, however, that if personal cloud storage is the ultimate goal, you may want to consider using CloudBerry Backup instead. This is a highly developed piece of software that will allow you to set up your NAS, integrate it with any existing cloud storage you may have, and also configure your backup for the whole shebang. For a more detailed look at this service, check out our CloudBerry Backup review.
Speaking of backup, this brings us full circle to the beginning of the article: NAS are great and a wonderful alternative to cloud storage, but a backup solution they are not. If something happens physically to your NAS (fire, flood, electrical surge), that data is gone for good.
However, a NAS can take the place of cloud storage in a thorough hybrid backup strategy, and there are plenty of great options out there for people who want to back up a NAS. Using any of our best online backup for NAS services will allow you to rest easy knowing that your data, come what may, is safely backed up somewhere on a remote and secure server.
NAS are wonderful machines that are perfect for anyone that wants to store large amounts of data, but wants to be able to access it from anywhere. If you're shackled to your home computer and only ever access files from there, you may want to look into setting up a RAID array rather than a NAS, but small- and medium-sized organizations, as well as people that are often on the move while working, will love the flexibility and speed a NAS offers.
The downsides to NAS are that decent models do not come cheap and security needs to be a concern that you keep in mind at all times. We recommend that people unused to dealing with technology take their time when setting up their NAS to avoid ending up like our Europol agent mentioned earlier. Also, having a good backup plan in place never hurts.
Are you a NAS user? Share your experiences below in the comments, we would love to hear your take on all this. Thank you for reading.
Standardized data the key to making cloud storage a commodity – TheServerSide.com
The world needs standardized, cheap, fast, and reliable storage. And the world needs a lot of it, as it is estimated that by 2020 the amount of stored data will be fifty times larger than it was ten years prior.
My prediction is that cloud storage will become a commodity, like electricity or bandwidth. The price of cloud storage isn't a race to zero, as some foolishly predict, but it is a race to the best price point and performance level technically achievable. And like electricity, it should be a one-size-fits-all service. Vendors like Amazon and Google have created artificial tiers (standard, reduced redundancy, infrequent access, nearline, coldline, Glacier and more), but they are all based on the same underlying disk storage. When you plug an electrical appliance into the wall socket, you don't have to choose what quality of electricity you want; it's basically one-size-fits-all. The same principle should also be applied to cloud storage.
Commodities rely on standardization. There isn't a standard API for cloud storage yet, though Amazon's S3 API is clearly at the head of the pack. Wouldn't it be nice if you could move your storage from one provider, such as Amazon, to another, such as Google or Microsoft, without changing one line of application code? Someday, I believe, this will be the case. API standardization is key to driving down price because it reduces or eliminates vendor lock-in. Once storage is truly portable, vendors will not be able to get away with locking you into their proprietary storage.
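That portability already half-exists wherever vendors expose S3-compatible endpoints. With boto3, switching providers can reduce to changing a single endpoint URL; the URL and bucket below are placeholders, not real services:

```python
# One code path, multiple S3-compatible backends: only the endpoint changes.
# Requires: pip install boto3. Endpoint URL and bucket are placeholders.
import boto3

def make_client(endpoint_url=None):
    # endpoint_url=None talks to AWS itself; any S3-compatible service
    # can be swapped in without touching the rest of the application code.
    return boto3.client("s3", endpoint_url=endpoint_url)

aws = make_client()
other = make_client("https://s3.example-provider.com")  # hypothetical provider

for client in (aws, other):
    client.put_object(Bucket="my-bucket", Key="hello.txt", Body=b"portable")
```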
Price is probably the biggest determinant of success in a commodity market. Take a look at gasoline, for example. It's a classic commodity product: gas is gas. All regular gas is 87 octane. Any car can fill up at any service station.
Similarly, when it comes to data storage, it will just come down to price. If all cloud storage was fast, super-reliable, and secure, then the only thing that would matter is price. With commodities, standardization drives down price.
Two factors are at play. First, as cloud storage prices drop, the migration from on-prem storage to the cloud becomes more compelling. With cloud storage prices from Amazon, Google, and Microsoft in the $.02-.03 per GB per month range, there are no significant savings versus the total cost of ownership of on-prem storage. However, cut the cost of cloud storage to a conceivable $.0039/GB/Mo and savings from migration to cloud storage can be 75% or more.
The second factor concerning price is that as storage costs drop, many new things become economically feasible that were not previously. For example, suppose you want to create the next Instagram or Pinterest: free apps that contain a lot of photos or videos. Your biggest cost will be storage. Your revenue will likely come from advertising. Your business model may never make money if you have to pay $.02/GB/month for storage, but it might be highly profitable if storage were at one-fifth that price.
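The claim checks out arithmetically; the prices come from the article, while the 1PB workload is an assumption for illustration:

```python
# Savings at the article's hypothetical price point, for an assumed 1 PB store.
current_rate = 0.02    # $/GB/month, low end of the big three's pricing
future_rate = 0.0039   # $/GB/month, the article's hypothetical (~one-fifth)
petabyte_gb = 1_000_000

print(f"today:   ${current_rate * petabyte_gb:,.0f}/month")      # $20,000/month
print(f"future:  ${future_rate * petabyte_gb:,.0f}/month")       # $3,900/month
print(f"savings: {1 - future_rate / current_rate:.0%}")          # ~80%
```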
Common wisdom says that a startup should not be able to store data as cheaply as an industry giant like Amazon. Yet history is replete with examples of innovation overcoming scale.
For nearly 100 years, US Steel was the largest steel company in the world. Then in 1968, a little company called Nucor invented the highly efficient steel mini-mill, which used the new technology of electric arc furnaces. Today, Nucor is the largest steel company in the U.S., and in May 2014, US Steel was removed from the S&P 500 index due to its declining market capitalization.
There are many other examples: AT&T vs. MCI, IBM vs. EMC, and so on.
My bet is that cloud storage will become just another piece of the infrastructure, like bandwidth or electricity. Almost every application needs storage. Nobody should be locked into proprietary vendor solutions. You should be able to unplug any cloud storage vendor and plug in another if you want, with no price hit or service lock-in. This is my dream: cloud storage that is so cheap, so fast, and so reliable that it works for almost any storage need.