Category Archives: Cloud Storage

Backup best practices for Microsoft Office 365 and cloud-based SaaS applications – TechRepublic

Microsoft Office 365 and other cloud-based SaaS applications are most effective when data backup best practices support them. Learn how to set up backup best practices here.

By David Friend, CEO and co-founder, Wasabi Technologies

Microsoft Office 365 and other cloud-based SaaS applications are a key working component of many business operations, enabling teams to share documents among multiple devices, easily house data in a private or public cloud, and work together in real time. But while Office 365 and similar SaaS apps store your data, Microsoft does not guarantee it will restore that data if it is lost. With Office 365 a prime target for SaaS attacks this year, there is no built-in protection against data deletion, whether it is accidental, intentional or criminal.


To protect valuable company data, backing up data assets in SaaS applications has never been more critical. Below are some important backup and data protection strategies for cloud-based SaaS users. Applied to your operations, these backup best practices can help minimize the damage caused by cyberattacks and other forms of data loss.


When organizations evaluate their backup options, some may be enticed to take the traditional approach of backing up their data on-premises. However, this on-prem approach falls short of today's data security needs: large data volumes quickly outgrow on-prem capacity, adding complexity and cost to scale and maintain it.

In addition, on-prem backups require ongoing synchronization with live data, which can squander an organization's time and resources, especially if it produces enormous amounts of data. Finally, on-prem storage leaves companies more susceptible to cyberattacks and data loss by its very nature.

Since on-prem storage effectively acts as a single copy of data, it can easily be targeted and damaged. When this happens, data backups are rarely in place to help teams with data recovery. This problem can ultimately be prevented by utilizing the cloud or other complementary backup solutions, like Veeam, which are able to restore data if human error or a breach occurs.

Compared to on-prem backups, cloud storage is a more flexible, risk-averse and cost-effective option for cloud-based SaaS users. In addition to generally being less expensive compared to on-premises solutions, it requires less time and resources to manage since cloud vendors handle maintenance and configuration needs for their customers.


Cloud backups can also mitigate the impacts of data breaches and ransomware attacks through the following cloud-based best practices:

With data backed up to the cloud, companies can use recovery testing to identify errors in their data recovery process before attacks occur. Since recovery can be complicated and time-consuming, testing it in advance helps address and eliminate issues before a real attack happens. The cloud makes it easy for businesses to test their recovery processes because data is readily accessible, providing ample preparation time.

Cloud providers can also offer immutable storage features, which prevent anyone, even a systems administrator, from adjusting, tampering with or deleting data during a set period of time. This storage and security feature is crucial to keeping files safe from corruption. Object-level immutability blunts ransomware attacks in which bad actors attempt to encrypt data, acting as an essential additional layer of protection for organizations' cloud backups.
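
As a concrete illustration, here is a minimal sketch of writing an immutable backup object through the S3-compatible Object Lock API with boto3. The bucket name, key and retention window are hypothetical, and the bucket must have been created with Object Lock enabled; this is one way to apply the feature, not a prescribed procedure.

```python
from datetime import datetime, timedelta, timezone

import boto3

# Hypothetical bucket and key; the bucket must be created with Object Lock enabled.
BUCKET = "org-backups-immutable"
KEY = "office365/2022-10-01/mailbox-export.tar.gz"

s3 = boto3.client("s3")  # works with S3-compatible stores via endpoint_url

with open("mailbox-export.tar.gz", "rb") as backup:
    s3.put_object(
        Bucket=BUCKET,
        Key=KEY,
        Body=backup,
        # Compliance-mode lock: nobody, not even an administrator, can delete or
        # overwrite this object version until the retention date passes.
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=90),
    )
```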

With a cloud backup or multicloud approach, organizations can diversify their backups and store their data in different environments. This is a more advantageous strategy compared to storing all data in one location, as it helps companies avoid the risk of losing everything during a single-system attack.

A 3-2-1 backup plan, which recommends companies keep three copies of their data, two on different media formats and one offsite, is a smart cloud security approach in this situation. This strategy prevents hackers from accessing every storage location and allows companies to continue functioning if an attack occurs, minimizing downtime.
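
To make the arithmetic of the rule concrete, a small sketch like the following can flag gaps in 3-2-1 coverage. The copy inventory is hypothetical and the check is illustrative only.

```python
def satisfies_3_2_1(copies):
    """copies: list of dicts with 'media' and 'offsite' keys for each backup copy."""
    total = len(copies)
    media_types = {c["media"] for c in copies}
    offsite = any(c["offsite"] for c in copies)
    # Three copies, on at least two media types, with at least one offsite.
    return total >= 3 and len(media_types) >= 2 and offsite

# Hypothetical inventory: primary on-prem copy, a local NAS copy, a cloud copy.
inventory = [
    {"media": "primary-disk", "offsite": False},
    {"media": "nas", "offsite": False},
    {"media": "cloud-object-storage", "offsite": True},
]
print(satisfies_3_2_1(inventory))  # True: 3 copies, 2+ media types, 1 offsite
```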

Businesses can no longer ignore security threats to Office 365 and other cloud-based SaaS platforms. As these tools become more integrated into business operations at all levels, data stored in these platforms without appropriate backups is becoming extremely vulnerable in the face of growing cyber risk. Protecting data assets with the right tools, training, and cloud data storage and backup strategies can remedy these security issues before it's too late.

David Friend is the co-founder and CEO of Wasabi, a revolutionary cloud storage company. David's first company, ARP Instruments, developed synthesizers used by Stevie Wonder, David Bowie and Led Zeppelin, and even helped Steven Spielberg communicate with aliens in Close Encounters of the Third Kind. Throughout his career, David has founded or co-founded five other companies in the tech space. David graduated from Yale and attended the Princeton University Graduate School of Engineering, where he was a David Sarnoff Fellow.


LucidLink Expands Security in the Cloud by Appointing Director of Information Security and Privacy – Sports Video Group

LucidLink announced Dr. Randall Magiera has been named Director of Information Security and Privacy. Randall will be responsible for scaling the company's security and privacy programs, including compliance, certifications, and risk assessment. These programs will strengthen end-to-end data integrity for LucidLink services and business practices.

"We've always believed that security and data privacy are as much about protecting customer data as they are about safeguarding our own," said Peter Thompson, co-founder and CEO of LucidLink. "As our customer base expands to large enterprises with stringent compliance requirements, Randall's experience in cloud security and complex policies for global operations is crucial to our business. He is a great addition to our team."

Randall joined LucidLink from CloudCheckr, where he was responsible for security risk and compliance from 2019 until joining LucidLink in 2022. Before CloudCheckr, he held the Virtualization and Identity Management Admin & Deputy Information Security Officer position at Finger Lakes Community College (FLCC) in New York. At FLCC, he developed the educational institution's security strategies and led the design and execution of vulnerability and risk assessments, network hardening, disaster recovery planning and attack surface optimization. Before FLCC, Randall was a senior associate in the Enterprise Risk Management division of Freed Maxick CPA, where he provided information security and assurance consulting services to enterprise clients. Previous roles included IT engineering and technical admin support positions for cloud solutions, storage and virtualization technologies. In addition to his business career, Randall is an adjunct professor teaching graduate-level information security and privacy classes for online programs.

Randall holds a doctor of science (D.Sc.) in cybersecurity and a doctor of philosophy (Ph.D.) in cybersecurity leadership. He has cybersecurity and privacy certifications, including CIPP/US, CIPP/E, CIPM, CISSP, and CISM.

"I'm honored to join the team at LucidLink," said Randall. "Not only does the company offer customers business and productivity benefits with real-time access to encrypted, end-to-end cloud storage, but its zero-knowledge model speaks volumes about how seriously this leadership team views data security."


Komprise tells users: Go do it yourself – Blocks and Files

Komprise has added self-service features for line-of-business (LOB) IT, analytics, and research teams to its unstructured data management software, lessening the burden on admins by giving users controlled read-only access to their individual data estates.

The Komprise Intelligent Data Management (IDM) product helps users manage and monitor unstructured data estates across their on-premises and public cloud environments and store their data more effectively. That means moving little-accessed data to cheaper storage, migrating data to the public clouds if needed, and understanding which users access what data and where that data is located.
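
Komprise performs this kind of analysis at enterprise scale. As a simple hedged illustration of the underlying idea (not Komprise's own tooling), a script can flag little-accessed files by last-access time before they are considered for a cheaper tier; the share path and threshold below are hypothetical.

```python
import os
import time

COLD_AFTER_DAYS = 180           # hypothetical "cold" threshold
ROOT = "/mnt/shared"            # hypothetical file share

cutoff = time.time() - COLD_AFTER_DAYS * 86400
cold_bytes = 0

for dirpath, _dirnames, filenames in os.walk(ROOT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        try:
            st = os.stat(path)
        except OSError:
            continue  # skip files that vanish or are unreadable mid-scan
        if st.st_atime < cutoff:  # not read since the cutoff date
            cold_bytes += st.st_size
            print(f"cold candidate: {path}")

print(f"total cold data: {cold_bytes / 1e9:.1f} GB")
```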

Todd Dorsey, a DCIG data storage analyst, said: "User self-service is a growing trend to offload administrative tasks from central IT and give end users the ability to get the data and functionality they need faster. By putting more control in the hands of departmental teams and data owners, Komprise is helping increase value from unstructured data in the enterprise."

This is a tool for admin staff to give them the equivalent of night vision in a previously dark data environment and organize it so that access and storage costs are optimised. Komprise IDM is an example of a hierarchical storage management (HSM) or information lifecycle management (ILM) product.

At present, if central IT wanted to find out in detail what file and object data a user department needs on various storage tiers, along with tier-level retention and data movement policies, it would have to ask the department, which takes time and occupies both IT admin and user department resources.

Now, with the new software, central IT can authorize departmental end users to access IDM to look at and into their own data and its usage. These users are given a Deep Analytics profile, but only with read access. They can then monitor usage metrics, data trends, tag and search data and identify datasets for analytics, tiering and deletion, which only IT admins could do up until now.

This capability builds upon Komprise's Smart Data Workflows, which enabled IT teams to automate the tagging and discovery of relevant file and object data across hybrid data storage silos and move the right data to cloud services.

Potential applications for this include:

This addition of user-level DIY facilities should help to enable finer-grained unstructured data management and faster response by both IT admins and user groups to changing situations.

The autumn release of Komprise's IDM software also features:

Komprise COO Krishna Subramanian claimed that the SMB protocol changes can accelerate data movements significantly: "It is orders of magnitude, around the 27 times mark that we benchmark; we will get that much. And it's often much more than that."

The new capabilities are spreading Komprise's use with its customers: "We are seeing the adoption of our product really kind of scale out. Every day we're getting new use cases from all these departments, like some of them wanted to use it to run genomics analysis in the cloud and things like that."

She said Komprise's customer count was approaching 500. More of these customers are using AWS than Azure, with GCP usage trailing the older two public clouds.


Edge storage: What it is and the technologies it uses – ComputerWeekly.com

Large, monolithic datacentres at the heart of enterprises could give way to hundreds or thousands of smaller data stores and devices, each with their own storage capacity.

The driver for this is organisations moving their processes to the business edge. Edge computing is no longer simply about putting some local storage into a remote or branch office (ROBO). Rather, it is being driven by the internet of things (IoT), smart devices and sensors, and technologies such as autonomous cars. All these technologies increasingly need their own local edge data storage.

Industry analyst Gartner confirms that business data is moving from the datacentre to the cloud and the edge. The firm identifies four use cases for edge storage: distributed clouds and datacentres, data processing at the edge, content collaboration and access, and digital ingest and streaming.

This isn't an exhaustive list; applications such as autonomous vehicles that sit outside enterprise IT are driving edge computing too. Meanwhile, industrial processes, sensors and IoT are all drivers that push more computing to the edge.

The market for edge storage is being shaped by changes in storage technology and by applications for edge computing. Increasingly, edge devices need persistent storage that is robust and secure, but applications also demand performance that goes beyond the SD or micro-SD cards found in early generation IoT devices and single board computers.

A few years ago, edge computing was most closely associated with remote or branch office (ROBO) deployments. For storage, ROBO was about providing at least some level of backup or replication to secure data, especially if a device failed, and caching or staging data before sending it to the datacentre for further processing. This batch-based approach worked well enough in retail and other environments with fairly predictable data flows.

But adding storage by way of a networked PC, a small server or a NAS device only really works in office or back office environments, because they are static, environmentally stable and usually reasonably secure.

Today's business edge is much larger and covers much more hostile operating environments. These range from the factory floor, with edge devices attached to manufacturing equipment and power tools, to cameras and other sensors out in the environment, to telecoms kit and even vehicles.

Enrico Signoretti, an analyst at GigaOm, describes these environments as the industrial edge, remote edge or far edge. Storage needs to be reliable, easy to manage and, given the number of devices firms might deploy, cost-effective.

Edge applications require storage to be physically robust, secure both physically and virtually (often encrypted), and able to withstand temperature fluctuations and vibration. It needs to be persistent, but draw little power. In some cases, it also needs to be fast, especially where firms want to apply artificial intelligence (AI) to systems at the edge.

Alex McDonald, Europe, Middle East and Africa (EMEA) chair at the Storage Networking Industry Association (SNIA), says that edge storage includes "storage and memory product technologies that provide residences for edge-generated data", including SSDs, SSD arrays, embedded DRAM [dynamic random-access memory], flash and persistent memory.

In some cases, storage and compute systems need to be adapted to operate in a much wider range of environments than conventional IT. This requires physical robustness and security measures. Single-board computers, for example, often rely on removable memory cards. Although encryption protects against data loss, it will not prevent someone physically removing the memory module.
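
As a hedged sketch of that point, data can be encrypted before it ever touches the removable card, so a stolen card yields only ciphertext. The paths and key handling below are illustrative; in practice the key would live in a secure element or TPM, not alongside the data.

```python
from cryptography.fernet import Fernet

# Illustrative only: on a real edge device the key would be provisioned into a
# secure element or TPM, never stored on the removable card itself.
key = Fernet.generate_key()
cipher = Fernet(key)

reading = b'{"sensor_id": "cam-07", "ts": 1664800000, "temp_c": 41.2}'  # hypothetical sample

# Write ciphertext to the removable medium; physical theft exposes no plaintext.
with open("/mnt/sdcard/reading-0001.bin", "wb") as out:
    out.write(cipher.encrypt(reading))

# Later, an authorised gateway holding the key can recover the data.
with open("/mnt/sdcard/reading-0001.bin", "rb") as blob:
    assert cipher.decrypt(blob.read()) == reading
```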

"Ruggedised and enhanced specification devices will support environments that require additional safeguarding in embedded applications, from automotive to manufacturing," says McDonald.

Organisations working with edge computing are also looking at storage class memory (SCM), NVMe-over-fabrics, and hyper-converged infrastructure (HCI).

Hyper-converged infrastructure, with its on-board storage, is perhaps best suited to applications that may need to scale up in the future. IT teams can add HCI nodes relatively easily even in remote locations without adding significant management overheads.

But for the most part, edge computing's storage requirements are relatively small. The focus is not on multiple terabytes of storage, but on systems that can handle time-sensitive, perishable data that is then analysed locally and passed on to a central system, usually the cloud, or a combination of both.

This requires systems that can take immediate action on the data, such as running analytics, before passing it on to a central store or process. This data triage needs to be nimble and, ideally, close to the compute resources. This, in turn, has prompted interest in NVMe-over-fibre channel and storage-class memory.

And, by putting some local storage into the device, systems designers can minimise one of edge computing's biggest challenges: its demands on bandwidth.

Organisations that want to add data storage to their edge systems do so, at least in part, to reduce demands on their networks and centralised datacentres, or to reduce latency in their processing.

Some firms now have so many edge devices that they risk overwhelming local networks. Although the idea of decentralised computing connected to the cloud is attractive, in practice network latency, the possibility of network disruption and even cloud storage costs have prompted device manufacturers to include at least support for local storage.

A growing number of vendors also make edge appliances that work alongside (or, more accurately, just behind) IoT devices to gather data from them. Some are data transfer devices, such as Google's Edge Appliance, while others take on some of the AI processing themselves, offloading it from the network.

By doing this, systems architects can provide a more robust form of edge computing. More data is processed near to the sensor or device, decisions can be made more quickly via analytics or AI, and the amount of data sent to the corporate LAN or cloud service can be vastly reduced.
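
A minimal, hypothetical sketch of that triage pattern: raw samples are buffered and analysed locally, and only a compact summary (plus any anomalies) is forwarded to the central service. The endpoint and threshold are made up for illustration.

```python
import json
import statistics
import urllib.request

CENTRAL_ENDPOINT = "https://example.com/ingest"   # hypothetical central/cloud endpoint
VIBRATION_ALERT = 12.0                            # hypothetical anomaly threshold

def triage(samples):
    """Reduce a window of raw sensor samples to a summary plus anomalies."""
    return {
        "count": len(samples),
        "mean": statistics.fmean(samples),
        "max": max(samples),
        "anomalies": [s for s in samples if s > VIBRATION_ALERT],
    }

def forward(summary):
    # Only the small summary crosses the network, not the raw stream.
    req = urllib.request.Request(
        CENTRAL_ENDPOINT,
        data=json.dumps(summary).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

window = [3.1, 2.9, 3.4, 14.2, 3.0]   # raw readings held in local edge storage
forward(triage(window))
```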

Adding storage to the edge, directly or via appliances, also allows for replication or batch-based archiving and makes it easier to operate with intermittent or unreliable connections, especially for mobile applications. Jimmy Tam, CEO of Peer Software, says that some vendors are integrating hard disk drives in combination with SSDs to allow devices to store larger data volumes at a lower cost.

"In the case where the edge storage is mainly focused as a data ingestion platform that then replicates or transmits the data to the cloud, a larger proportion of storage may be HDD instead of SSD to allow for more data density," he says.

It seems unlikely that any single storage technology will dominate at the edge. As Gartner notes in a recent research report: "Although edge storage solutions possess common fundamental principles, it is not a single technology, because it needs to be tailored to the specific use cases."

Nonetheless, Gartner expects to see more data storage technology being edge ready, including datacentre technologies that work better with the demands of the edge.

IoT and other edge vendors will work to improve storage performance, especially by moving to server and workstation-class storage, such as Flash, NVMe and NVMe-over-fabrics, as well as storage-class memory, rather than USB-based technologies such as SD or micro-SD.

But the real focus looks set to be on how to manage ever larger numbers of storage-equipped devices. Developments such as 5G will only increase the applications for edge computing, so firms will look for storage that is not just rugged but self-healing and, at least in normal operations, can largely manage itself.


Google Will Let Healthcare Organizations Use Its AI To Analyze And Store X-Rays – Forbes

New tools from Google's cloud unit will help healthcare organizations analyze and store medical images.

Google on Tuesday announced a new set of artificial intelligence tools aimed at letting healthcare organizations use the search giant's software and servers to read, store and label X-rays, MRIs and other medical imaging.

The tools, from Google's cloud unit, allow hospitals and medical companies to search through imaging metadata or develop software to quickly analyze images for diagnoses. Called the Medical Imaging Suite, the tools can also help healthcare professionals to automatically annotate medical images and build machine learning models for research.
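
For a sense of what searching imaging metadata can look like in practice, here is a hedged sketch of a DICOMweb (QIDO-RS) study search against the Cloud Healthcare API. The project, location, dataset and store names are placeholders, and the exact endpoint and parameters should be checked against Google's current documentation; this is not a description of the Medical Imaging Suite itself.

```python
import google.auth
from google.auth.transport.requests import AuthorizedSession

# Placeholder resource names for a hypothetical DICOM store.
PROJECT, LOCATION, DATASET, STORE = "my-project", "us-central1", "imaging-ds", "xray-store"

credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
session = AuthorizedSession(credentials)

base = (
    f"https://healthcare.googleapis.com/v1/projects/{PROJECT}/locations/{LOCATION}"
    f"/datasets/{DATASET}/dicomStores/{STORE}/dicomWeb"
)

# QIDO-RS search: list up to 10 studies whose modality is CR (computed radiography).
resp = session.get(
    f"{base}/studies",
    params={"ModalitiesInStudy": "CR", "limit": "10"},
    headers={"Accept": "application/dicom+json"},
)
resp.raise_for_status()
for study in resp.json():
    print(study.get("0020000D", {}).get("Value"))  # Study Instance UID tag
```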

"With the advancements in medical imaging technology, there's been an increase in the size and complexity of these images," Alissa Hsu Lynch, Google Cloud's global lead for health tech strategy and solutions, said in an interview. "We know that AI can enable faster, more accurate diagnosis and therefore help improve productivity for healthcare workers."

Based on Google's other forays into healthcare, privacy advocates may raise concerns that the tech giant, which makes the majority of its $257 billion annual revenue from personalized ads based on user data, would use patient information to feed its vast advertising machine.

Lynch says Google doesn't have any access to patients' protected health information, and none of the data from the service would be used for the company's advertising efforts. Google claims the service is compliant with the Health Insurance Portability and Accountability Act, or HIPAA, a federal law that regulates the use of patient data.

The tech giant is working with a handful of medical organizations as early partners for the imaging software. One partner, a company called Hologic, is using the Google suite for cloud storage, as well as developing tech to help improve cervical cancer diagnostics. Another partner called Hackensack Meridian Health, a network of healthcare providers in New Jersey, is using the tools to scrub identifying information from millions of gigabytes of X-rays. The company will also use the software to help build an algorithm for predicting the metastasis of prostate cancer.

Google's software tools will help healthcare organizations to view and search through imaging data.

The new tools come as Google and its parent company Alphabet invest more heavily in health-related initiatives. In the early days of the pandemic, Alphabet's Verily unit, which focuses on life sciences and med tech, partnered with the Trump administration to provide online screening for Covid tests. Google also partnered with Apple to create a system for contact tracing on smartphones. Last year the company dissolved its Google Health unit, restructuring its health efforts so they weren't housed in one central division.

Google has stirred controversy in the past for its healthcare efforts. In 2019, Google drew blowback for an initiative called Project Nightingale, in which the company partnered with Ascension, the second-largest healthcare system in the country, to collect the personal health information of millions of people. The data included lab results, diagnoses and hospitalization records, including names and birthdays, according to the Wall Street Journal, though Google at the time said the project complied with federal law. Google had reportedly been using the data in part to design new software.

Two years earlier, the tech giant partnered with the National Institutes of Health to publicly post more than 100,000 images of human chest X-rays. The goal was to showcase the company's cloud storage capabilities and make the data available to researchers. But two days before the images were to be posted, the NIH told Google its software had not properly removed data from the X-rays that could identify patients, according to The Washington Post, which would potentially violate federal law. In response, Google canceled its project with the NIH.

Asked about Google's past fumble with de-identifying information, Sameer Sethi, SVP and chief data and analytics officer at Hackensack Meridian Health, says the company has safeguards in place to prevent such mishaps.

"You never actually trust the tool," he told Forbes. He adds Hackensack Meridian Health works with a third-party company to certify that the images are de-identified, even after using Google's tools. "We will not bring anything to use without expert determination."


Elastic Announces the Beta of New Universal Profiling and Additional Synthetic Monitoring Capabilities to Enhance Cloud-Native Observability -…

MOUNTAIN VIEW, Calif.--(BUSINESS WIRE)--Elastic (NYSE: ESTC), the company behind Elasticsearch, today announced new features and enhancements across its Elastic Observability solution, enabling customers to gain deeper and more frictionless visibility at all levels of applications, services, and infrastructure.

Innovations across the Elastic Observability solution include:

Providing effortless, deep visibility for cloud-native production environments with zero instrumentation and low overhead, with always-on Universal Profiling

Elastic's new Universal Profiling capability, now in private beta, provides visibility into how application code and infrastructure are performing at all times in production, across a wide range of languages, in both containerized and non-containerized environments.

Modern cloud-native environments are increasingly complex, creating infrastructure and application blind spots for DevOps and SRE teams. Engineering teams typically use profiling to spot performance bottlenecks and troubleshoot issues faster. However, most profiling solutions have significant drawbacks limiting adoption in production environments:

Universal Profiling is lightweight and requires zero instrumentation. Enabled by eBPF-based technology, it overcomes the limitations of other profiling solutions by requiring no changes to the application code, making it easier to quickly identify performance bottlenecks, improve time to resolve problems, and reduce cloud costs.

The low overhead of Universal Profiling (less than 1% of CPU) makes it possible to deploy in production environments to deliver deep and broad visibility into infrastructure and cloud-native application performance at scale.

For a production application running across a few hundred servers, early results show code optimization savings of 10% to 20% of CPU resources, resulting in cost savings and a reduction of CO2 emissions per year.
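
Taken at face value, the trade-off is easy to quantify. In the sketch below, the fleet size and hourly price are hypothetical; only the less-than-1% overhead and 10% to 20% savings figures come from the announcement.

```python
servers = 300                  # hypothetical fleet size ("a few hundred servers")
hourly_cost = 0.20             # hypothetical per-server compute cost in USD
hours_per_year = 24 * 365

baseline = servers * hourly_cost * hours_per_year
overhead = baseline * 0.01                                    # <1% CPU overhead for always-on profiling
savings_low, savings_high = baseline * 0.10, baseline * 0.20  # 10-20% from code optimisation

print(f"baseline spend:   ${baseline:,.0f}/year")
print(f"profiling cost:   ${overhead:,.0f}/year at most")
print(f"potential saving: ${savings_low:,.0f} to ${savings_high:,.0f}/year")
```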

Introducing new capabilities to cloud- and developer-first synthetic monitoring

Synthetic monitoring enables teams to proactively simulate user interactions in applications to quickly detect user-facing availability and performance issues and optimize the end-user experience.
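
As a generic illustration of what a synthetic check does (not Elastic's implementation), the sketch below probes a hypothetical endpoint and flags slow or failed responses against an assumed latency budget.

```python
import time
import urllib.request

URL = "https://example.com/login"   # hypothetical user-facing endpoint
LATENCY_BUDGET_S = 1.5              # hypothetical budget for this journey step

def synthetic_check(url):
    """Simulate a single user interaction and time it."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            ok = resp.status == 200
    except OSError:
        ok = False
    elapsed = time.monotonic() - start
    return ok and elapsed <= LATENCY_BUDGET_S, elapsed

healthy, elapsed = synthetic_check(URL)
print(f"{URL}: {'OK' if healthy else 'ALERT'} ({elapsed:.2f}s)")
```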

Designed to reduce manual and repetitive tasks for development and operations teams, Elastic is introducing the beta of the following innovative synthetic monitoring capabilities available within the current Uptime application for Elastic Cloud customers:

Additionally, a new and intuitive user interface to simplify workflows and make it easier to identify and quickly troubleshoot problems in production is currently under development and planned for future availability.

For more information, read the Elastic blog about what's new in Elastic Observability. Additional information about how Elastic Universal Profiling provides visibility into how application code and infrastructure are performing at all times can be found here.

Supporting Quotes

About Elastic:

Elastic (NYSE: ESTC) is a leading platform for search-powered solutions. We help organizations, their employees, and their customers accelerate the results that matter. With solutions in Enterprise Search, Observability, and Security, we enhance customer and employee search experiences, keep mission-critical applications running smoothly, and protect against cyber threats. Delivered wherever data lives, in one cloud, across multiple clouds, or on-premise, Elastic enables 19,000+ customers and more than half of the Fortune 500, to achieve new levels of success at scale and on a single platform. Learn more at elastic.co.

The release and timing of any features or functionality described in this document remain at Elastics sole discretion. Any features or functionality not currently available may not be delivered on time or at all.

Elastic and associated marks are trademarks or registered trademarks of Elastic N.V. and its subsidiaries. All other company and product names may be trademarks of their respective owners.


Spot by NetApp exec on acquisitions and catalog slimming – TechTarget

NetApp has continued expanding Spot, the all-encompassing portfolio for its data management products, throughout 2022.

The company's acquisitions this year include Fylamynt, a cloud automation startup, and Instaclustr, a database-as-a-service vendor. Terms for both deals, which were focused on rounding out Spot by NetApp, were not disclosed.

The acquisition spree appears to be winding down, however, with Michael Berry, CFO and executive vice president at NetApp, mentioning a potential pause on further acquisitions in the coming months during its fourth quarter and fiscal 2022 earnings call in June.

"From a capital allocation perspective, we expect to hit pause on CloudOps acquisitions for the first half of fiscal '23, as we focus on strengthening our field and customer success go-to-market motions, while integrating our CloudOps product portfolio," Berry said during the call.

In this Q&A, Kevin McGrath, vice president and general manager of Spot by NetApp, spoke about how recent acquisitions will add to the Spot portfolio for customers, how he sees the software suite evolving and what the company's next maneuvers in the cloud might be. McGrath was the former CTO at Spot before NetApp's acquisition of the startup in 2020.

Early analyst sentiment following the acquisition of Fylamynt indicated some optimism about what it can add to the platform. What's the planned implementation for the technology, and what's next?

Kevin McGrath: Fylamynt is going to become Spot Connect and is going to be that connective tissue between everything that we do.

One of the things we've done is we have all these acquisitions [with] all these APIs. Spot Connect is going to be a drag-and-drop system. We want to connect all the different parts of what NetApp is putting together. We're not only going to put [our products] in the same console, but we're going to give you a nice, neat way to connect them -- not only with each other, but with the services you use, like ServiceNow, Jira and Slack.


[NetApp] Cloud Insights has a lot of that visibility and optimization that we're going to start bringing into the platform. I think we will have a networking story sooner rather than later, as we start bringing all these [tools] together.

Last year, NetApp announced partnerships with some cloud hyperscalers, such as the debut of Amazon FSx for NetApp OnTap on AWS. However, Spot by NetApp is a challenger to some hyperscaler tools and capabilities. How do you see those partnerships evolving?

McGrath: What the major cloud providers want is more usage. Are we going to compete with some of the tools a cloud provider is going to offer? Absolutely.

I think, in some cases, we're going to step on each other's toes. But in other cases, we're going to prove [to] the cloud provider that if someone uses our tool set, they're going to be a happier customer, a stickier customer and a customer that will eventually scale more on their cloud.

One trend we've seen this year is vendors attempting to sell outside of typical IT silos. Is Spot by NetApp pursuing a similar goal of expanding NetApp's presence in the customer's data management stack beyond storage, albeit with individual offerings within a catalog rather than just one product?

McGrath: I think that's one of the big transitions that you're seeing in the market. Your top-down sell from the CIO into an IT team, that's kind of breaking off. We don't always sell into the main IT department. NetApp CEO George Kurian said he wants to force NetApp to get out of just selling to storage admins.

The people who are going to use data going forward are not necessarily storage engineers. Remember that, at the end of the day, [NetApp OnTap] is not a hardware thing. It's software that can run anywhere. It doesn't have to run on the hardware that we ship to data centers. I think there's a concerted effort to say, 'We are going to adapt to the cloud and not try to get the cloud to adapt to us.'

I don't know if there's one specific area, but these platform engineering teams, these DevOps teams -- they have an impossible job. They get requirements from application teams, finance teams, business teams. As that role expands at more and more companies, we're going to keep solving for their pain points.

We don't want to take away any of the entry points. We want to keep that consumption-based model of cloud. But for our larger customers, we absolutely want to produce a method for them to come in and consume data, and not have to worry about the point solutions underneath. A suite of services in their own way, a more consumable way, as a single SKU.

Editor's note: This interview has been edited for clarity, length and style.

Tim McCarthy is a journalist living on the North Shore of Massachusetts. He covers cloud and data storage news.


Tips to achieve compliance with GDPR in cloud storage – TechTarget

Despite its widespread popularity, cloud storage presents inherent risk, especially when businesses use cloud providers that do not give customers the same amount of control over their data as they would with an on-premises data center.

Logically, the best choice for GDPR-compliant cloud storage is a provider that actively protects data privacy, as well as encrypts critical files and other personally identifiable information (PII).

GDPR ensures that organizations based in the European Union and any organization that does business with an EU member nation follow strict protocols to protect personal data. The regulation aims to prevent unauthorized access to personal data and ensures that companies and individuals know where their personal data is, how to access it, and how and when the data is used.

Additional attributes include fines and penalties for data breaches, documentation of activities to ensure data privacy and protection, establishment of a data protection officer (DPO) within GDPR-compliant entities, and regular reviews and audits of GDPR activities.

GDPR compliance is mandatory if the provider has a business relationship with an EU-based organization. Ask the vendor for evidence of GDPR compliance.

Most major cloud vendors are GDPR-compliant since they likely have customers in EU member nations. If this is not the case, personal data owners must ask for consent from visitors to company websites and other resources that note personal data may be processed. Failure to do so may result in financial penalties for noncompliance with GDPR.

Access to secure email is an important way to validate that vendors are GDPR-compliant. Providers should also encrypt all data. Vendors that demonstrate they have no knowledge of a user's personal data are likely to be GDPR-compliant.
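
One way a customer can verify, rather than assume, that data at rest is encrypted is to request server-side encryption explicitly on upload and then inspect the stored object's metadata. The boto3 sketch below is illustrative; the bucket, key alias and object names are hypothetical, and other providers expose equivalent controls.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and customer-managed KMS key alias for EU customer records.
with open("2022-10-01.csv", "rb") as export:
    s3.put_object(
        Bucket="eu-customer-records",
        Key="crm/exports/2022-10-01.csv",
        Body=export,
        ServerSideEncryption="aws:kms",   # encrypt at rest with a managed key
        SSEKMSKeyId="alias/gdpr-pii",     # hypothetical key alias
    )

# Confirm the stored object reports encryption in its metadata.
head = s3.head_object(Bucket="eu-customer-records", Key="crm/exports/2022-10-01.csv")
print(head["ServerSideEncryption"], head.get("SSEKMSKeyId"))
```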

GDPR requirements can be difficult to understand and apply. Organizations that store customer data or PII within cloud storage should know relevant GDPR rules and regulations to ensure compliance. Organizations can also look to regulations to ensure their data is compliant with GDPR, even if they store it with a cloud provider.

Organizations that process personal data, such as the cloud vendor, must do so "in a lawful, fair and transparent manner." To achieve this, organizations must do the following:

An organization that processes data must only collect necessary data and not retain it once it is processed. They cannot process data for any reason other than the stated purpose or ask for additional data they do not need. They must ask if personal data can be deleted once it has served its original purpose.

Data owners and data controllers have the right to ask the cloud provider what data it has about them and what it has done with that data. They can ask for corrections to their data, initiate a complaint and request the transfer or deletion of personal data.

Data owners must provide documented permission when a data processor wants to perform an action on personal data beyond the original requirements.

The processing entity or cloud vendor must inform applicable regulators of a data breach within 72 hours and notify affected personal data owners without undue delay. The vendor must also maintain a log of data breach events.

Organizations that plan to switch cloud vendors must design features into the new system that ensure privacy, security and GDPR-compliant management of personal data.

Organizations that process personal data must perform a Data Protection Impact Assessment in advance of any new project or modifications to existing systems that may affect how they process personal data.

If a third party might process data, the organization that processes personal data -- the controller -- is responsible for the protection of personal data. This is also true if the controller transfers data within the organization.

The DPO's responsibility is to ensure personal data is processed safely and securely. They must also ensure compliance with GDPR. The data owner and data processors, such as cloud vendors, can establish this role.

To ensure companywide support for GDPR, data owners and processing entities must make employees aware of the regulations and provide training so that employees know their responsibilities.

The following is a brief list of GDPR-compliant storage vendors, most of which have cloud storage resources:

Protection of personal data is what GDPR is all about, and its regulations are specific about how to protect personal data. Organizations that wish to be GDPR-compliant should have an operational policy, procedures and protocols related to the storage and processing of personal data. They must also be able to document transactions that involve personal data to support the organization's GDPR compliance. Document these activities for audit purposes, and review and update them regularly.


Google launches storage services with knobs on – Blocks and Files

Google Cloud has expanded its storage portfolio, augmenting existing services and launching a dedicated backup and data recovery service for the first time.

Google Cloud's group product manager for storage, Sean Derrington, said the services were aimed as much at traditional enterprises as cloud-native organizations, as both are looking to build resilient, continental-scale systems and, of course, to drive down costs. A third aim was to support data-rich applications, he said, which in most cases still feature a combination of on-prem as well as cloud data, meaning migration is always an issue.

So, in no particular order: Google Cloud Hyperdisk is described as a next-generation complement to its Persistent Disk block storage service, with different implementations for different workloads. Hyperdisk Extreme, for example, will support up to 300,000 IOPS and 4Gbps, to support demanding database workloads such as HANA.

But, Derrington continued: "We're giving customers the option to basically have knobs that they can turn. Say within Hyperdisk Extreme, as an example, if I want to tune my IOPS to a certain level and I want my throughput to a lower level, because my application needs are different. And then I can also set the capacity."

And if customers want to turn all the knobs to 11, they can, said Derrington; the settings can also be adjusted over time as applications and workloads evolve.
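
A hedged sketch of what turning those knobs might look like programmatically once the service is available, using the Compute Engine API via google-api-python-client. The disk type string and IOPS figure are assumptions based on the announcement, so check them against current documentation before use; the project, zone and disk name are placeholders.

```python
from googleapiclient import discovery

compute = discovery.build("compute", "v1")

PROJECT, ZONE = "my-project", "us-central1-a"   # hypothetical project and zone

disk_body = {
    "name": "hana-data-disk",                             # hypothetical disk name
    "type": f"zones/{ZONE}/diskTypes/hyperdisk-extreme",  # assumed Hyperdisk type URI
    "sizeGb": "2048",                                     # capacity knob
    "provisionedIops": 150000,                            # IOPS knob, tuned to the workload
}

op = compute.disks().insert(project=PROJECT, zone=ZONE, body=disk_body).execute()
print(op["status"])
```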

The service will be rolled out in Q4.

For those who want to twiddle as few knobs as possible, Cloud Storage Autoclass will relieve storage admins of the drudgery of deciding which data is hot and which is cold, and therefore where to keep it across Google's standard, nearline, coldline and archive cloud storage tiers.
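
For contrast, this is roughly the manual lifecycle-rule chore that Autoclass is meant to remove, sketched with the google-cloud-storage client against a hypothetical bucket; the age thresholds are arbitrary examples, not recommendations.

```python
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("analytics-archive")   # hypothetical bucket

# Hand-tuned tiering rules: demote objects untouched for 30 days to Nearline,
# and for 90 days to Coldline. Autoclass makes these decisions automatically.
bucket.add_lifecycle_set_storage_class_rule("NEARLINE", age=30)
bucket.add_lifecycle_set_storage_class_rule("COLDLINE", age=90)
bucket.patch()

for rule in bucket.lifecycle_rules:
    print(rule)
```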

Whatever tier data is in, it can be recovered in milliseconds, Derrington said. Google has also added Storage Insights to the Google storage console to highlight exactly what is going on within their data, for example, levels of duplication. This is in preview, with general availability in Q4.

As the birthplace of Kubernetes, Google has, perhaps unsurprisingly, launched a Backup for GKE service, offering backup and disaster recovery for K8s apps with persistent data. While there are third-party services that offer this, Derrington said Google was the first hyperscaler to do so. Again, the service will launch in early Q4.

It has also launched Filestore Enterprise multishare for GKE, which allows multiple pods (up to thousands) access to the same data, helping optimize storage utilization. It will be rolled out by the end of the year.

With the debut of a GKE backup service, it would seem odd for Google not to also launch a general Backup and Data Recovery service covering Googles VMware Engine and Compute Engine platforms, as well as databases. Which is just what it has done.

"This is actually from the Actifio acquisition that we closed in December of 2020. This is now fully integrated into the Cloud Console," said Derrington, and it will be available this month.

Derrington said Google customers were perfectly free to continue using whatever backup and DR service they were using previously: "We do have an open ecosystem."


Zesty lands $75M for tech that adjusts cloud usage to save money – TechCrunch

Spending on the cloud shows no signs of slowing down. In the first quarter of 2021, corporate cloud services infrastructure investment increased to $41.8 billion, representing 35% year-on-year growth, according to Grand View Research. But while both small- and medium-sized businesses and enterprises admit that theyre spending more on the cloud, theyre also struggling to keep costs under control. According to a 2020 Statista survey, companies estimate that 30% of their cloud spend is ultimately wasted.

The desire to better manage cloud costs has spawned a cottage industry of vendors selling services that putatively rein in companies' infrastructure spending. The category grows by the hour, but one of the more successful providers to date is Zesty, which automatically scales resources to meet app demands in real time.

Zesty today closed a $75 million Series B round co-led by B Capital and Sapphire Ventures with participation from Next47 and S Capital. Bringing the company's total raised to $116 million, the proceeds will be put toward supporting product development and expanding Zesty's workforce from 120 employees to 160 by the end of the year, CEO Maxim Melamedov tells TechCrunch.

"DevOps engineers face limitations such as discount program commitments and preset storage volume capacity, CPU and RAM, all of which cannot be continuously adjusted to suit changing demand," Melamedov said in an email interview. "This results in countless wasted engineering hours attempting to predict and manually adjust cloud infrastructure as well as billions of dollars thrown away each year."

Melamedov founded Zesty with Alexey Baikov in 2019, after the pair observed that cloud infrastructure wasn't keeping up with the pace of change in business environments. Prior to co-launching Zesty, Melamedov was the VP of customer success at Gimmonix, a travel tech company. He briefly worked together with Baikov at big data firm Feedvisor. Baikov was previously a DevOps team lead at Netvertise.

Image Credits: Zesty

At the core of Zesty is an AI model trained on real-world and synthetic cloud resource usage data that attempts to predict how many cloud resources (e.g., CPU cores, hard drives and so on) an app needs at any given time. The platform takes actions informed by the models projections, like automatically shrinking, expanding and adjusting storage volume types and purchasing and selling public cloud instances.

To increase or decrease storage, Zesty transforms filesystem volumes in the cloud into a virtual disk with a series of multiple volumes, each of which can be expanded or shrunk. On the compute side, the platform collects real-time performance metrics, buying or selling cloud compute in response to app usage.
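
The general pattern, stripped of Zesty's proprietary model, looks something like the control loop below. The forecast function, thresholds and scaling actions are all hypothetical stand-ins, not Zesty's algorithm.

```python
import statistics

def forecast_cpu(history):
    """Hypothetical stand-in for a trained model: naive trend-adjusted average."""
    trend = history[-1] - history[0]
    return statistics.fmean(history) + trend / len(history)

def plan_capacity(history, current_cores, target_util=0.65):
    predicted_load = forecast_cpu(history)          # cores the app is expected to need
    desired = max(1, round(predicted_load / target_util))
    if desired > current_cores:
        return f"scale up: buy {desired - current_cores} cores"
    if desired < current_cores:
        return f"scale down: release {current_cores - desired} cores"
    return "hold"

# Hypothetical recent CPU-core usage samples for one service.
print(plan_capacity([18, 20, 23, 26, 30], current_cores=32))
```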

"The primary tools we use to design efficient automation of cloud resources come from the fields of decision analysis and resource management. Many of the classical techniques used to solve such problems can be slow and not suitable for real-time decision making, where fast response to change is critical," Melamedov said. "With Zesty, organizations dramatically reduce cloud costs and alleviate the burdensome task of managing cloud resources in a constantly shifting business environment. Because in a world that's always changing, Zesty enables the infrastructure to change right around with it."

Those are lofty promises, to be sure. But Zesty has managed to grow its customer base to over 300 companies, including startups Heap, Armis and WalkMe, suggesting that it's doing something right.

"[T]he pandemic create[d] a whole new level of demand for our solutions and we have been fortunate to see huge demand growth for our products," Melamedov said. "Companies were not only looking to save money, but they were [also] forced to cut staff. Freeing up DevOps and other operational personnel became critically important, and that's where we came in, freeing them up from having to babysit the cloud and constantly be on call to adjust cloud resources as needs shifted. The current [economic] slowdown as well has only helped showcase our value even more, now that we have dozens of case studies we can share that show quick and easy return on investment."

Zesty's challenge will be continuing to stand out in a field of rivals. Microsoft in 2017 acquired Cloudyn, which provided tools to analyze and forecast cloud spending. Then, in 2019, Apptio snatched up cloud spending management vendor Cloudability, while VMware, NetApp and Intel bought CloudHealth, Spot (formerly Spotinst) and Granulate, respectively, within the span of a few years. Elsewhere, ventures such as Granulate, Cast AI, Exotanium and Sync Computing have raised tens of millions of venture capital dollars for their cloud spend-optimizing tech.

Melamedov wouldn't go into specifics around Zesty's financials. But he expressed confidence in the company's prospects, revealing that Zesty has reached an annual run rate in the tens of millions.
