Category Archives: Cloud Storage
FloQast Unveils New Cloud Storage and Security Integrations – CPAPracticeAdvisor.com
FloQast, Inc.'s accounting software now integrates with leading cloud storage providers Microsoft OneDrive and Egnyte as well as cloud Single Sign-On (SSO) solutions from Google and Okta. FloQast is a provider of close management software created by accountants for accountants to close the books faster and more accurately.
The out-of-the-box integrations help simplify the setup and adoption of FloQast's close management software while bolstering security by providing secure access via SSO. These new integrations address enhanced security and governance requirements in security-conscious industries such as financial services, healthcare and aerospace, among others.
With the new integrations of Microsoft OneDrive and Egnyte, FloQast close management software can directly and securely access financial data residing in Excel workbooks housed within these cloud storage applications. This innovative approach ensures that accountants can leverage the familiarity and flexibility of Excel while maintaining security and retaining ownership and control of their sensitive financial data. FloQast accomplishes this by securely accessing customer financial data from Excel-based account reconciliations to make certain all accounts are automatically tied-out against the General Ledger system. This approach reduces the risk of error and eliminates hours of manual work each month.
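The automatic tie-out described above can be illustrated with a short sketch. This is not FloQast's actual code; the account names, balances and matching tolerance are hypothetical, and a real product would read the balances from Excel workbooks and the general ledger system rather than from in-memory dictionaries.

```python
# Illustrative sketch (not FloQast's implementation): tie out Excel-based
# reconciliation balances against general ledger balances and flag any
# account that does not match.

def tie_out(reconciliations, general_ledger):
    """Return accounts whose reconciled balance differs from the GL."""
    exceptions = {}
    for account, reconciled in reconciliations.items():
        gl_balance = general_ledger.get(account)
        # An account fails to tie out if it is missing from the GL or the
        # balances differ (compared to the cent).
        if gl_balance is None or round(reconciled - gl_balance, 2) != 0:
            exceptions[account] = (reconciled, gl_balance)
    return exceptions

recs = {"1000-Cash": 52_310.25, "1200-AR": 18_400.00}
gl = {"1000-Cash": 52_310.25, "1200-AR": 18_250.00}
print(tie_out(recs, gl))  # only 1200-AR fails to tie out
```

Automating this comparison is what removes the manual work: only the exceptions need human attention each month.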
The integrations with Okta and Google SSO further strengthen security by supporting password complexity and Multi-Factor Authentication. Integration with the identity management solutions helps ensure FloQast close management software can only be accessed by authorized users which bolsters governance and security.
These new integrations complement FloQast's existing partnerships with Box, Dropbox and Google Drive.
"The financial services industry -- specifically accounting -- is extremely security conscious as it constantly deals with high volumes of highly sensitive information," said Ronen Vengosh, vice president of business development at Egnyte. "Egnyte's integration with FloQast provides an easy-to-use interface for accounting professionals to collaborate on financial records and efficiently close their books, without losing custody of sensitive documents or risking violation of compliance regulations."
"FloQast provides accounting teams a single place to manage the close and gives everyone visibility. These new integrations expand our current product footprint and extend our capabilities, while also demonstrating FloQast's flexibility to address the myriad increasingly important concerns accountants face today," said Mike Whitmire, CPA, co-founder and chief executive officer of FloQast. "The new integrations with Egnyte, OneDrive, Google SSO and Okta, along with our existing integration partners, help enforce the highest levels of governance and security."
To learn more about how accounting teams can close faster and more accurately while securely using Excel, visit http://www.floqast.com/closesoftware.
FloQast is a leading developer of close management software, created by accountants for accountants to close faster and more accurately. Working with accounting teams' existing checklists and Excel, FloQast provides a single place to manage the month-end close and gives everyone visibility. FloQast customers on average close three days faster. The award-winning solution is trusted by hundreds of accounting departments, including those at Twilio, Nutanix, Zillow and the Golden State Warriors. To learn more, visit http://www.floqast.com and join the conversation on Twitter @floqast.
FloQast rolls out new integrations for cloud storage and enhanced security – Accounting Today
FloQast's close management software for accountants now integrates with cloud storage solutions Microsoft OneDrive and Egnyte, and has been added to cloud single sign-on (SSO) ecosystems from Google and Okta. The integrations are meant to address high-level security and governance requirements for the financial services industry, among others.
The integrations with Microsoft OneDrive and Egnyte are designed to let FloQast users access financial data residing in Excel workbooks housed within these cloud storage applications. Within these integrated channels, FloQast accesses customer financial data from Excel-based account reconciliations to ensure all accounts are automatically tied-out against the general ledger system.
The Okta and Google SSO integrations are hoped to bolster security by supporting password complexity and multi-factor authentication. The idea is that within these ecosystems, FloQast close management software can only be accessed by authorized users.
FloQast already integrates with Box, Dropbox and Google Drive.
Get talking to your third party cloud storage partners now, as GDPR is coming – www.computing.co.uk
Computing reminded the enterprise of some of the stickier points of GDPR this week, after appearing on HelpSystems' web seminar "Preparing for GDPR: The First Steps to GDPR Compliance".
As host Donnie MacColl, director of EMEA technical services at HelpSystems, worked his way through GDPR's "eight rights", Computing's Peter Gothard supplied commentary on the realities of complying with the regulations, which come into force on 25 May 2018.
MacColl reminded viewers that the "Right to be informed" covers 'first contact' details as specific as those made between firms and customers through phone calls or websites.
"You have to be notified either by the privacy policy - which will now need to be updated for GDPR - that has to be concise and intelligible, how it's going to be stored and how long it's going to be stored," said MacColl.
"[So] if you call an insurance company and they say they're going to record [the call], they'll have to give you their policy."
MacColl also cited the example on "Right to access" of his own mortgage application having his wife recorded as an "interested party".
"Under the new rules," explained MacColl, "that would have to be made clear."
As for the "Right to rectification" and the "Right to erasure" - i.e. the 'right to be forgotten' - Gothard had a warning:
"While [these two points] are common sense and all very good - because it just makes sense to have control over our own data - from CIOs and also lawyers I've spoken to over the past year or so, the real problem is how mired organisations are going to be in their old way of doing things, and how you're going to keep finding legacy processes in place that you'd never thought of, that may actually be blocking these processes from happening [beyond your direct control].
"So it might be that while a company understands it needs to rectify or remove data, it's held in a third party's servers or in a place you can't easily get to it."
Getting to this data will therefore require impressing the requirements of GDPR on a number of business partners, which will be "like untangling a ball of wool," observed Gothard.
"In such a short amount of time, it's the part of GDPR that's probably going to require the most thought, really," concluded Gothard.
SwiftStack object storage gets two-way Cloud Sync – TechTarget
SwiftStack's new software upgrade enables two-way data synchronization between its on-premises private cloud storage and the public cloud storage offerings from Google and Amazon.
The SwiftStack object storage software 5 release, which became generally available today, enhances the Cloud Sync feature the San Francisco-based software vendor added late last year. Cloud Sync enables customers to set policies to automatically place data across private and public storage for off-site data protection and cloud-based archiving. SwiftStack stores data in the same object format both on premises and in the cloud.
SwiftStack object storage is based on open source OpenStack Swift software and runs on commodity Linux servers. The company sells and supports a commercial version of Swift, and its engineering team adds management services and features such as load balancing, metadata search and Cloud Sync. SwiftStack object storage also includes a file system gateway and the vendor is working on native file access capability.
The SwiftStack Cloud Sync feature formerly supported only one-way replication to a bucket in Amazon Simple Storage Service (S3) or Glacier or in the four Google Cloud Storage offerings. The new bi-directional replication capability is designed to ease collaboration with external partners, by synchronizing on-premises data with shared cloud buckets, and facilitate bursting to the cloud for additional resources.
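The general idea behind bi-directional replication can be sketched with a last-writer-wins merge: compare both sides' object metadata and copy whichever version is newer. This illustrates the concept only, not SwiftStack's actual Cloud Sync implementation; the object names, version labels and timestamps below are hypothetical.

```python
# Hedged sketch of two-way synchronization between a local cluster and a
# shared cloud bucket, deciding direction per object by modification time.

def two_way_sync(local, cloud):
    """Merge two {object_name: (version, mtime)} maps; the newer side wins.
    Returns (objects to push to the cloud, objects to pull down locally)."""
    push, pull = [], []
    for name in set(local) | set(cloud):
        l, c = local.get(name), cloud.get(name)
        if c is None or (l is not None and l[1] > c[1]):
            push.append(name)   # local copy is newer or cloud lacks it
        elif l is None or c[1] > l[1]:
            pull.append(name)   # cloud copy is newer or local lacks it
    return push, pull

local = {"report.xlsx": ("v2", 200), "raw.csv": ("v1", 100)}
cloud = {"report.xlsx": ("v1", 150), "results.json": ("v1", 300)}
push, pull = two_way_sync(local, cloud)
print(sorted(push), pull)
```

In this toy run, the locally edited report and the cloud-only results file each flow in the appropriate direction, which is what enables the cloud-bursting workflow described in the article.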
"Before, if people were going to do a lot of compute, they had to more manually replicate data from a SwiftStack cluster to a spot in the public cloud and then replicate the results back," said Mario Blandini, vice president of marketing at SwiftStack. "Now that we have this going both ways, it allows you to use that elastic compute in the public cloud."
Scott Sinclair, a senior analyst at Enterprise Strategy Group, said the new Cloud Sync capabilities would also improve collaboration, by synchronizing data that users and applications modify within the public cloud back to the on-premises storage. He said the new capabilities would help SwiftStack to "transition from an on-premises software-defined infrastructure," with tiering to the cloud, "to more of a hybrid cloud storage layer."
"With this next step and the bi-directional aspects, the cloud infrastructure becomes a partner in the data center ecosystem," Sinclair said. "And if you follow this trend moving forward, there's the potential for [SwiftStack] to start delivering a more capable and more functional hybrid cloud infrastructure."
Steven Hill, a senior storage analyst at 451 Research, said the value of Cloud Sync replication was illustrated by the three-hour outage that Amazon S3 storage services experienced on February 28 "and the resulting chaos for websites and customers who depended solely on the S3 East Zone out of Virginia."
Hill said a website using SwiftStack object storage with Amazon S3 and Google Cloud Storage could have simply pointed to on-premises storage or the Google Cloud Storage when Amazon S3 became unavailable. That would, however, require customers to pay to store the same data in two clouds.
"The storage would have existed at more than one location. The challenge is keeping all those different resources in synchronization," said Hill, noting that that's what SwiftStack's Cloud Sync does.
SwiftStack object storage does not support direct bi-directional replication between two public clouds. Blandini said a future SwiftStack release would support synchronization from one public cloud to another public cloud for new "multi-cloud" use cases.
"Our vision has always been that we would [provide] data management for data that resides on prem or in 'n' number of public clouds," Blandini said.
Another upcoming feature Blandini mentioned is "policy balancing" to enable the system to put more data in one cloud versus another cloud if, for example, the cloud storage service prices were to change. The customer might have one or more copies of the data on premises and another copy in Google or Amazon, or distributed between the two public cloud services. SwiftStack plans to add policy balancing in the next four to six weeks, according to Blandini.
In the meantime, other new capabilities in SwiftStack 5 include multi-region erasure coding for data protection and support for deep buckets with more than 100 million objects from the OpenStack software distribution. SwiftStack is also making available a new desktop client for the Windows and Apple OS X operating systems.
Laura DuBois, group vice president for IDC's enterprise storage, server and system infrastructure software research, claims multi-region erasure coding is a "must-have" for any object-based storage.
DuBois said SwiftStack's main challenge is "go to market" against the major vendors of object-based storage such as Dell EMC, Hitachi Data Systems, IBM and NetApp.
"What sets [SwiftStack] apart is the file/object gateway, flexible deployment and OpenStack integration," DuBois wrote in an email. She estimated that about 10% of SwiftStack's customers use OpenStack, and many hail from the media and entertainment and life sciences industries.
Australian crowdsourcing platform turns to cloud storage to fuel growth – ComputerWeekly.com
When Airtasker hit the magic hockey stick growth spurt it had always wanted, the Australian crowdsourcing service provider realised it needed to change its storage strategy.
Airtasker launched in 2012 and attracted 45,000 users, putting through A$1m (US$760,000) worth of jobs in its first four years. Today, the platform boasts some 950,000 users.
"We have been growing really fast over the past 18 months," said Paul Keen, chief technology officer at Airtasker, which has an annualised revenue run rate of A$85m for the 2017 calendar year. "Eighteen months ago, that would have been close to A$15m."
Much of Airtasker's success may be attributed to its emphasis on building trust between job posters and workers, who bid for 65,000 jobs each month, from transcribing videos to cleaning bedrooms.
Through the platform, workers will quote for jobs, while job posters award jobs based on the quotes and the worker's task history, profile and reputation rating. Airtasker takes a 15% cut of the payment for completed jobs.
"What we find now is that people assign and bid for tasks based on reputation - we have 400,000 reviews on our site," said Keen. "When you have plenty of reviews to go by, you're likely to choose the person who's not necessarily the cheapest, but the best at a job."
Keen joined Airtasker a year ago when the company was climbing through its hockey stick growth curve. He had to make sure the company's technology enabled rather than impeded growth.
Then, Airtasker was using four production servers in a managed environment. "They were about as powerful as a MacBook Pro," he said. "It was a pretty basic scenario. We had a managed NAS (network attached storage), but we had no idea what it was really doing or when it would run out of storage."
Keen had already seen the problems a shared NAS system could bring at his previous company. "When that NAS went down, the whole site went down and there's nothing you could do," he said. "You've got to wait several hours for the environment to come back up."
To avoid similar issues, and the onerous task of managing a NAS system, Airtasker decided to move to Amazon Web Services (AWS), where infrastructure management would be taken care of. "I don't know why you would want to manage storage nowadays when there are enough providers out there that can do it for you," said Keen.
Keen liked the fact that cloud service providers such as AWS offer snappy elasticity that ensures Airtasker's storage capacity scales up as the business grows.
"Recently, we had to go from half a gigabyte of database storage to two terabytes," he said. "It took just two clicks of a button for that to happen with zero downtime. Half an hour later, my entire storage environment was moved to the new environment."
The initial move to AWS, however, took more than two months, as Airtasker implemented an immutable infrastructure strategy, where IT components are replaced rather than changed when managing services and deploying software.
Immutable infrastructure is an approach that is only possible with cloud platforms, which have the automation capabilities required to build and deploy components. Airtasker worked with Rackspace on the deployment scripts.
"Instead of upgrading our storage environment, we build a separate environment next to it, do some testing and switch users over. Once we are comfortable with the new environment, we destroy the old one," said Keen.
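The build-alongside, test, switch, destroy pattern Keen describes can be sketched as follows. This is a minimal illustration of the immutable-infrastructure idea, not Airtasker's deployment scripts; the environment names and health check are hypothetical, and a real setup would drive the same steps through provisioning tooling.

```python
# Minimal sketch of an immutable "blue-green" style upgrade: stand up a new
# environment next to the live one, verify it, switch traffic, then destroy
# the old environment rather than mutating it in place.

destroyed = []

def destroy(env):
    """Tear down an environment (recorded here for illustration)."""
    destroyed.append(env)

def immutable_upgrade(live_env, build_env, health_check):
    if not health_check(build_env):
        return live_env       # new build failed checks: keep serving old
    old = live_env
    live_env = build_env      # switch users over to the new environment
    destroy(old)              # old environment is discarded, not patched
    return live_env

current = immutable_upgrade("storage-v1", "storage-v2", lambda env: True)
print(current, destroyed)  # storage-v2 ['storage-v1']
```

Because components are replaced rather than changed, the deployment script is the single source of truth for what is running, which is what gives Keen confidence in elastic growth.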
By building from scratch, Keen said he is now able to grow the company's storage capacity in an elastic manner, given that he knows exactly what is going on in the deployment script.
The best part is that this approach is invisible to Airtasker's developers, who need to work fast and focus on customer needs instead of firefighting. "There's no value in building storage environments, which should just be there for customers," said Keen.
Keen concedes there is supplier lock-in with using AWS, but it doesn't bother him. "This is my third AWS migration and I have come to the conclusion that if you want to get the best out of these public clouds, there's a whole degree of supplier lock-in," he said.
Outside of IT, Airtasker's employees use Google Drive and Dropbox for cloud storage. "We like that there's a lot of security around those services, and it's the provider's responsibility to ensure governance. We also store all the logs, but that's more for audit purposes," he added.
AWS claims human error to blame for US cloud storage outage – ComputerWeekly.com
Amazon Web Services (AWS) says human error caused the cloud storage system outage, which lasted several hours and affected thousands of customers earlier this week.
Amazon's Simple Storage Service (S3), which provides backend support for websites, applications and other cloud services, ran into technical difficulties on the morning of Tuesday 28 February in the US, returning error messages to those trying to use it.
The cloud service giant revealed the cause in a post-mortem-style blog post, and explained the issue can be traced back to some exploratory work its engineers were doing to establish why the S3 billing system was performing so slowly.
During this process, a number of servers providing underlying support for two S3 subsystems were accidentally removed, requiring a full restart, which caused the problems.
"An authorised S3 team member using an established playbook executed a command which was intended to remove a small number of servers for one of the S3 subsystems that is used by the S3 billing process," said the blog post.
"Unfortunately, one of the inputs to the command was entered incorrectly and a larger set of servers was removed than intended."
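The kind of safeguard this incident calls for can be sketched simply: refuse any capacity-removal request that would take out more than a bounded fraction of the fleet. This is a hypothetical illustration of the principle, not AWS's actual tooling; the function, fleet names and threshold are all invented for the example.

```python
# Hedged sketch of a guarded capacity-removal command: an incorrectly
# entered input can no longer remove an unexpectedly large set of servers.

def remove_servers(fleet, to_remove, max_fraction=0.1):
    """Remove servers only if the request stays within a safety margin."""
    requested = set(to_remove) & set(fleet)
    if len(requested) > max_fraction * len(fleet):
        raise ValueError(
            f"refusing to remove {len(requested)} of {len(fleet)} servers; "
            f"limit is {max_fraction:.0%}"
        )
    return [s for s in fleet if s not in requested]

fleet = [f"s3-index-{i}" for i in range(100)]
# A small, intended removal succeeds:
print(len(remove_servers(fleet, ["s3-index-1", "s3-index-2"])))  # 98
```

A fat-fingered request for a fifth of the fleet would raise an error instead of executing, forcing a human to confirm the intent.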
This affected instances of S3 run out of the firm's US East-1 datacentre region in Virginia, US, causing havoc for a number of high-profile websites and service providers, including the cloud-based collaboration platform Box and instant and group messaging site Slack.
The outage also had a knock-on impact on a number of AWS services, hosted from US East-1, that rely on S3 for backend support, including Amazon Elastic Compute Cloud (EC2), AWS Elastic Block Store and AWS Lambda.
It also caused the AWS service status page to stop working, causing problems for users keen to find out when the firm's systems would be back up and running again.
The downtime has prompted numerous industry commentators to speak up about the risks involved with running a business off the infrastructure of a single cloud provider, while others have seized on it to reinforce the importance of having a robust business continuity strategy in place.
AWS, however, goes on to say its platforms are built to be highly resilient, but the full-scale restart of S3 took much longer than anticipated.
"We build our systems with the assumption that things will occasionally fail, and we rely on the ability to remove and replace capacity as one of our core operational processes," said the post.
"While this is an operation that we have relied on to maintain our systems since the launch of S3, we have not completely restarted the index subsystem or the placement subsystem in our larger regions for many years."
"S3 has experienced massive growth over the last several years and the process of restarting these services and running the necessary safety checks to validate the integrity of the metadata took longer than expected," it added.
The incident has prompted AWS to re-evaluate the setup of its S3 infrastructure, the blog post continues, to prevent similar incidents from occurring in future.
"We want to apologise for the impact this event caused for our customers. While we are proud of our long track record of availability with Amazon S3, we know how critical this service is to our customers, their applications and users, and their businesses. We will do everything we can to learn from this event and use it to improve our availability even further," it concluded.
Overcome problems with public cloud storage providers – TechTarget
If you have a new app or use case requiring scalable, on-demand or pay-as-you-go storage, one or more public cloud storage services will probably make your short list. It's likely your development team has at least dabbled with cloud storage, and you may be using cloud storage today to support secondary uses such as backup, archiving or analytics.
While cloud storage has come a long way, its use for production apps remains relatively limited. Taneja Group surveyed enterprises and midsize businesses in 2014 and again in 2016, asking whether they are running any business-critical workloads (e.g., ERP, customer relationship management [CRM] or other line-of-business apps) in a public cloud (see "Deployments on the rise"). Less than half were running one or more critical apps in the cloud in 2014, and that percentage grew to just over 60% in 2016. Though cloud adoption for critical apps has increased significantly, many IT managers remain hesitant about committing production apps and data to public cloud storage providers.
Concerns about security and compliance are big obstacles to public cloud storage adoption, as IT managers balk at having critical data move and reside outside data center walls. Poor application performance, often stemming from unpredictable spikes in network latency, is another top-of-mind issue. And then there's the cost and difficulty of moving large volumes of data in and out of the cloud or within the cloud itself, say when pursuing a multicloud approach or switching providers. Another challenge is the need to reliably and efficiently back up cloud-based data, traditionally not well supported by most public cloud storage providers.
How can you overcome these kinds of issues and ensure your public cloud storage deployment will be successful, including for production workloads? We suggest using a three-step process to assess, compare and contrast providers' key capabilities, service-level agreements (SLAs) and track records so you can make a better informed decision (see: "Three-step approach to cloud storage adoption").
Let's examine specific security, compliance and performance capabilities as well as SLA commitments you should look for when evaluating public cloud storage providers.
Maintaining cloud data storage security is generally understood to operate under a shared responsibility model: The provider is responsible for security of the underlying infrastructure, and you are responsible for data placed on the cloud as well as devices or data you connect to the cloud.
All three major cloud storage infrastructure-as-a-service providers (Amazon Web Services [AWS], Microsoft Azure and Google Cloud) have made significant investments to protect their physical data center facilities and cloud infrastructure, placing a particular emphasis on securing their networks from attacks, intrusions and the like. Smaller and regional players tend also to focus on securing their cloud infrastructure. Still, take the time to review technical white papers and best practices to fully understand available security provisions.
Though you will be responsible for securing the data you connect or move to the cloud, public cloud storage providers offer tools and capabilities to assist. These generally fall into one of three categories of protection: data access, data in transit or data at rest.
Data access: Overall, providers allow you to protect and control access to user accounts, compute instances, APIs and data, just as you would in your own data center. This is accomplished through authentication credentials such as passwords, cryptographic keys, certificates or digital signatures. Specific data access capabilities and policies let you restrict and regulate access to particular storage buckets, objects or files. For example, within Amazon Simple Storage Service (S3), you can use Access Control Lists (ACLs) to grant groups of AWS users read or write access to specific buckets or objects and employ Bucket Policies to enable or disable permissions across some or all of the objects in a given bucket. Check each provider's credentials and policies to verify they satisfy your internal requirements. Though most make multifactor authentication optional, we recommend enabling it for account logins.
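A bucket policy of the kind mentioned above is expressed as an IAM policy JSON document. The sketch below constructs a read-only policy; the bucket name and account ID are placeholders, and applying it to a real bucket would go through the provider's API or console rather than this helper.

```python
import json

# Sketch of an S3 bucket policy granting a specific AWS account read-only
# access to a bucket and its objects. Bucket name and account ID are
# hypothetical placeholders.

def read_only_bucket_policy(bucket, account_id):
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowReadOnly",
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{account_id}:root"},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            # The bucket ARN covers ListBucket; the /* ARN covers objects.
            "Resource": [f"arn:aws:s3:::{bucket}",
                         f"arn:aws:s3:::{bucket}/*"],
        }],
    }

policy = read_only_bucket_policy("example-recon-bucket", "123456789012")
print(json.dumps(policy, indent=2))
```

When reviewing a provider, check that its policy language can express the distinctions you need, such as separating list access from read access as done here.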
Data in transit: To protect data in transit, public cloud storage providers offer one or more forms of transport-level or client-side encryption. For example, Microsoft recommends using HTTPS to ensure secure transmission of data over the public internet to and from Azure Storage, and offers client-side encryption to encrypt data before it's transferred to Azure Storage. Similarly, Amazon provides SSL-encrypted endpoints to enable secure uploading and downloading of data between S3 and client endpoints, whether they reside within or outside of AWS. Verify that the encryption approach in each provider's service is rigorous enough to comply with relevant security or industry-level standards.
Data at rest: To secure data at rest, some public cloud storage providers automatically encrypt data when it's stored, while others offer a choice of having them encrypt the data or doing it yourself. Google Cloud Platform services, for instance, always encrypt customer content stored at rest. Google encrypts new data stored in persistent disks using the 256-bit Advanced Encryption Standard (AES-256) and offers you the choice of having Google supply and manage the encryption keys or doing it yourself. Microsoft Azure, on the other hand, enables you to encrypt data using client-side encryption (protecting it both in transit and at rest) or to rely on Storage Service Encryption (SSE) to automatically encrypt data as it is written to Azure Storage. Amazon's offering for encrypting data at rest in S3 is nearly identical to Microsoft Azure's.
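On S3, opting into server-side encryption is a per-request parameter. The sketch below shows the shape of such a request; the bucket and key names are placeholders, and in a real client the dictionary would be passed to an upload call such as boto3's s3.put_object(**params) with valid credentials.

```python
# Sketch of the request parameters for an S3 upload with server-side
# encryption (SSE-S3, AES-256). Bucket, key and body are hypothetical.

params = {
    "Bucket": "example-finance-bucket",
    "Key": "reconciliations/2017-02.xlsx",
    "Body": b"...workbook bytes...",
    "ServerSideEncryption": "AES256",  # S3 encrypts the object at rest
}
print(params["ServerSideEncryption"])  # AES256
```

The point to verify during evaluation is whether encryption at rest is on by default (as with Google) or must be requested per object or per bucket, as in this sketch.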
Also, check for data access logging -- to enable a record of access requests to specific buckets or objects -- and data disposal (wiping) provisions, to ensure data's fully destroyed if you decide to move it to a new provider's service.
Your provider should offer resources and controls that allow you to comply with key security standards and industry regulations. For example, depending on your industry, business focus and IT requirements, you may look for help in complying with Health Insurance Portability and Accountability Act, Service Organization Controls 1 financial reporting, Payment Card Industry Data Security Standard or FedRAMP security controls for information stored and processed in the cloud. So be sure to check out the list of supported compliance standards, including third-party certifications and accreditations.
Unlike security and compliance, for which you can make an objective assessment, application performance is highly dependent on your IT environment, including cloud infrastructure configuration, network connection speeds and the additional traffic running over that connection. If you're achieving an I/O latency of 5 to 10 milliseconds running with traditional storage on premises, or even better than that with flash storage, you will want to prequalify application performance before committing to a cloud provider. It's difficult to anticipate how well a latency-sensitive application will perform in a public cloud environment without actually testing it under the kinds of conditions you expect to see in production.
Speed of access is based, in part, on data location, meaning expect better performance if you colocate apps in the cloud. If you're planning to store primary data in the cloud but keep production workloads running on premises, evaluate the use of an on-premises cloud storage gateway -- such as Azure StorSimple or AWS Storage Gateway -- to cache frequently accessed data locally and (likely) compress or deduplicate it before it's sent to the cloud.
To further address the performance needs of I/O-intensive use cases and applications, major public cloud storage providers offer premium storage capabilities, along with instances that are optimized for such workloads. For example, Microsoft Azure offers Premium Storage, allowing virtual machine disks to store data on SSDs. This helps solve the latency issue by enabling I/O-hungry enterprise workloads such as CRM, messaging and other database apps to be moved to the cloud. As you might expect, these premium storage services come with a higher price tag than conventional cloud storage.
Bottom line on application performance: Try before you buy.
A cloud storage service-level agreement spells out guarantees for minimum uptime during monthly billing periods, along with the recourse you're entitled to if those commitments aren't met. Contrary to many customers' wishes, SLAs do not include objectives or commitments for other important aspects of the storage service, such as maximum latency, minimum I/O performance or worst-case data durability.
In the case of the "big three" providers' services, the monthly uptime percentage is calculated by subtracting from 100% the average percentage of service requests not fulfilled due to "errors," with the percentages calculated every five minutes (or one hour in the case of Microsoft Azure Storage) and averaged over the course of the month.
Typically, when the uptime percentage for a provider's single-region, standard storage service falls below 99.9% during the month, you will be entitled to a service credit. (Though it's not calculated this way for SLA purposes, 99.9% availability implies no more than 43 minutes of downtime in a 30-day month.) The provider will typically credit 10% of the current monthly charges for uptime levels between 99% and 99.9%, and 25% for uptime levels below 99% (Google Cloud Storage credits up to 50% if uptime falls below 95%). Microsoft Azure Storage considers storage transactions failures if they exceed a maximum processing time (based on request type), while Amazon S3 and Google Cloud Storage rely on internally generated error codes to measure failed storage requests. Note that the burden is on you as the customer to request a service credit in a timely manner if a monthly uptime guarantee isn't met.
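The tiered credit scheme described above reduces to a simple calculation. A sketch, assuming the 10% and 25% tiers common to the big three plus Google's extra tier below 95% (exact tiers and error-rate definitions vary by provider and change over time):

```python
def monthly_uptime(error_rates):
    """Average per-interval error percentages (e.g. one sample per
    five-minute window) and subtract from 100% to get monthly uptime."""
    return 100.0 - sum(error_rates) / len(error_rates)

def service_credit(uptime_pct, monthly_charge, has_95_tier=False):
    """Tiered credit: 10% below 99.9%, 25% below 99%, optionally 50% below 95%."""
    if has_95_tier and uptime_pct < 95.0:
        return monthly_charge * 0.50
    if uptime_pct < 99.0:
        return monthly_charge * 0.25
    if uptime_pct < 99.9:
        return monthly_charge * 0.10
    return 0.0

# Example: a steady 0.5% error rate over a 30-day month (8,640 five-minute
# intervals) yields 99.5% uptime -> a 10% credit on a $1,000 bill.
uptime = monthly_uptime([0.5] * 8640)
print(uptime, service_credit(uptime, 1000.0))  # 99.5 100.0
```

Remember that the credit is not automatic: per the SLAs, the customer must detect the breach and file the claim.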
Also, carefully evaluate the SLAs to determine whether they satisfy your availability requirements for both data and workloads. If a single-region service isn't likely to meet your needs, it may make sense to pay the premium for a multi-region service, in which copies of data are dispersed across multiple geographies. This approach increases data availability, but it won't protect you from instances of data corruption or accidental deletions, which are simply propagated across regions as data is replicated.
With these guidelines and caveats in mind, you can better assess whether public cloud storage makes sense for your particular use cases, data and applications. If public cloud storage providers' service-level commitments and capabilities fall short of meeting your requirements, consider developing a private cloud or taking advantage of managed cloud services.
Though public cloud storage may not be an ideal fit for your production data and workloads, you may find it fits the bill for some of your less demanding use cases.
Original post:
Overcome problems with public cloud storage providers - TechTarget
Scality adds HALO cloud-based monitoring services to RING – TechTarget
Scality has added cloud-based monitoring services to its RING object storage that perform predictive analytics to improve uptime.
The Scality HALO Cloud Monitor, launched this week, is available for on-premises customers and service providers who build private clouds on RING.
HALO remote cloud-based monitoring services allow users to gauge the health of servers, network bandwidth and storage. HALO uses machine learning to analyze previous system behavior and define a group of key performance indicators (KPIs) to detect changes in the storage environment that indicate potential problems in the system. The KPIs are predefined by Scality support experts, but users have the option to customize them.
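The behavior described here -- learning a baseline for each KPI and raising alarms on deviations -- can be illustrated with a rolling mean and standard-deviation check. This is a generic sketch of the idea, not Scality's actual algorithm; the KPI names and the three-sigma threshold are made up for illustration:

```python
import statistics

def detect_anomalies(history, current, n_sigma=3.0):
    """Flag KPIs whose current value deviates more than n_sigma
    standard deviations from the learned (historical) baseline."""
    alarms = {}
    for kpi, samples in history.items():
        mean = statistics.mean(samples)
        stdev = statistics.stdev(samples)
        if stdev > 0 and abs(current[kpi] - mean) > n_sigma * stdev:
            alarms[kpi] = current[kpi]
    return alarms

# Hypothetical KPIs: disk latency looks normal, the error rate has spiked.
history = {
    "disk_latency_ms": [5, 6, 5, 7, 6, 5, 6, 5],
    "request_errors_per_min": [1, 0, 2, 1, 1, 0, 1, 2],
}
current = {"disk_latency_ms": 6, "request_errors_per_min": 40}
print(detect_anomalies(history, current))  # {'request_errors_per_min': 40}
```

In a product like HALO, the baseline would be learned continuously per system, with the vendor-predefined KPIs as the starting set and customer overrides layered on top.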
The standard version of the HALO Cloud Monitor pulls information from 15 metrics for predictive analysis and capacity planning. Scality also offers a full-scale Dedicated Care Services (DCS) version that gathers data based on 100 metrics. Both versions support Amazon Simple Storage Service-based deployments.
HALO's centralized dashboard shows system-level statistics on memory, disks, CPUs and storage. It gives visuals of events and offers proactive incident detection and system health checks. Alarms are triggered when behavioral changes are detected in the object storage.
"It continuously measures from a user perspective to make sure [the system] is working well," said Daniel Binsfeld, Scality's vice president of DevOps and global support.
The standard version of the cloud-based monitoring service became generally available in January. DCS went into beta in February, with general availability scheduled for this month.
George Crump, president and founder of analyst firm Storage Switzerland, said Scality's HALO cloud monitoring services show that object storage technology is maturing.
"It's almost a big data program," Crump said. "It analyzes the data that the object storage has about itself. It gives the ability to consume and act upon that data. Most systems have general information about what is going on, but they don't provide the ability to consume that data."
Scality offers a 100% uptime guarantee with its DCS program. DCS provides capacity planning and root-cause analysis of the KPIs that detect problems in the system. DCS includes Scality's in-house support team. If the cloud-based monitoring service becomes unavailable at any time, the customer does not have to pay a service fee for the affected time period.
"If you use HALO with our DCS program, then our people are doing the monitoring," said Paul Turner, Scality's chief marketing officer. "Those customers get a 100% availability guarantee. We make sure the system is always up and running."
Scality RING software uses a decentralized distributed architecture, providing concurrent access to data stored on x86-based hardware. RING's core features include replication and erasure coding for data protection, auto-tiering and geographic redundancies inside a cluster.
Object storage makes use of erasure coding for data resilience and to avoid the use of RAID. As the capacities of hard disk drives grow, RAID rebuilds become time-consuming when large drives fail.
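The simplest instance of erasure coding is single-parity XOR: any one lost fragment can be rebuilt from the survivors, without a full-drive RAID rebuild. A toy illustration of the principle (production systems such as RING use more general Reed-Solomon-style codes that tolerate multiple simultaneous fragment losses):

```python
def xor_parity(fragments):
    """Compute a parity fragment as the byte-wise XOR of equal-length fragments."""
    parity = bytearray(len(fragments[0]))
    for frag in fragments:
        for i, byte in enumerate(frag):
            parity[i] ^= byte
    return bytes(parity)

def rebuild(surviving, parity):
    """Recover a single lost fragment: XOR the parity with all survivors."""
    return xor_parity(surviving + [parity])

data = [b"chunk-A1", b"chunk-B2", b"chunk-C3"]
parity = xor_parity(data)

# Lose the second fragment and reconstruct it from the rest.
lost = data[1]
recovered = rebuild([data[0], data[2]], parity)
print(recovered == lost)  # True
```

Because only the missing fragment is recomputed, repair traffic scales with the lost data rather than with the size of the failed drive.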
Read the original:
Scality adds HALO cloud-based monitoring services to RING - TechTarget
Last chance to get a lifetime subscription to pCloud Premium Cloud Storage for just $59.99 – Neowin
Today's highlighted deal comes from our Software section of Neowin Deals, where it's your last chance to get 87% off* a lifetime subscription to pCloud Premium Cloud Storage. Save, share and enjoy even your largest files quickly and securely with pCloud.
You've got too many files in your life to effectively manage on just one device, which is where pCloud comes in handy. A supremely secure web storage space for all of your photos, videos, music, documents, and more, pCloud gives you an easily accessible place to store your valuables without taking up any precious data on your devices. With unrivaled transfer speed and security, pCloud makes saving and sharing memories extremely easy.
For specifications and license info, please click here.
A lifetime license to pCloud Premium Cloud Storage normally carries a recommended retail price* of $478.80, but for a limited time it can be yours for just $59.99, a saving of $418.81.
Disclosure: This is a StackCommerce deal or giveaway in partnership with Neowin; an account at StackCommerce is required to participate in any deals or giveaways. For a full description of StackCommerce's privacy guidelines, go here. Neowin benefits from shared revenue of each sale made through our branded deals site, and it all goes toward the running costs. *Values or percentages mentioned above are subject to StackCommerce's own determination of retail pricing.
Read this article:
Last chance to get a lifetime subscription to pCloud Premium Cloud Storage for just $59.99 - Neowin
Infrastructure provisioning made easier with hybrid cloud storage – TechTarget
Better access to digital information has opened new revenue opportunities in nearly every industry. Whether it involves business intelligence and analytics, mobility or the internet of things, it's clear the next level of business competitiveness is being built upon a foundation of data availability. None of this is news, of course. And, if you're reading this column, you are likely already heavily involved in the ongoing battle to ensure that IT resources keep pace with the increasing demands placed on business applications and data.
The good news is IT infrastructure innovation has accelerated as well. Hardware, for example, continues to become faster and more affordable. As a result, storage systems perform faster, scale higher and hold more capacity than ever. In theory, you'd think the two trends -- growing demands on one side, ever-more capable infrastructure on the other -- would cancel each other out. But that's not how it works in the real world. While there are a number of reasons for this gap, one that doesn't get discussed enough is the time to provision new storage capacity.
When storage vendors discuss time to provision, they tend to focus on how easy it is to set up and configure an array physically located on site in a rack with adequate power and cooling. Here, setup time is often a very small portion of the entire process. The true time to provision, however, encompasses everything from the moment you identify a storage resource need to the moment newly acquired resources are made available to applications. That end-to-end process can take months, and the few minutes or hours it takes to set up the final storage array is only a small part of the overall pain. Meanwhile, application demands continue to grow while the provisioning process plays out.
Delays in provisioning infrastructure don't just slow down new IT initiatives. In this era where business competitiveness is often determined by data access, delays can negatively impact revenue opportunities and the bottom line as well. For years, you could address the time-to-provision challenge by simply deploying more storage capacity than immediately necessary, giving the environment room to support near-term growth during the sometimes lengthy process of new storage system procurement. While still considered a best practice by some, having excess infrastructure just sitting around doing nothing adds unnecessary cost, a nonstarter in this age of tighter budgets.
One obvious method for reducing the time to provision storage is using public cloud services. While this can provide near-immediate access to new capacity, concerns over performance, security and other business considerations often lead firms to prudently retain a significant portion of data on premises. The trick here is to achieve cloud-like agility in storage provisioning while maintaining those on-premises capabilities required by many workloads. A number of options are available to help improve, or at least mask, time-to-provision challenges for on-premises infrastructure.
IT demands change so rapidly that new resources are often needed immediately, not months down the road. Some look to the public cloud to solve these challenges, but these services alone aren't right for everyone or every workload. In response, on-premises vendors are offering greater intelligence and more flexibility in payment options to ease the burden of deploying new capacity on site. While there are benefits to this approach, it can still be a challenge to match the agility of the public cloud. For that, hybrid clouds have stepped in as an excellent option to deliver on-premises performance and security while integrating the agility of public cloud infrastructure.
Read more:
Infrastructure provisioning made easier with hybrid cloud storage - TechTarget