3 Approaches to Leveraging Open Source in the Cloud – ITPro Today

The cloud and open source software have long been frenemies at best. Although a number of public cloud services are based, at least in part, on open source platforms or tools, the cloud services themselves are very much not open source.

That doesn't mean, however, that using the cloud means missing out on the benefits of open source. With the right approach, you can enjoy open source and the convenience of cloud computing at the same time.

Related: Four Major Open Source Hybrid Cloud Platforms

When cloud computing first emerged, it spawned more than a little worry among advocates of free and open source software.

GNU founder Richard Stallman, for example, warned that cloud platforms "give someone else power over your computing."

The point he was making was that, when you use a cloud service or software-as-a-service (SaaS) application, you're using a computing environment that is controlled by an external vendor. Cloud vendors very rarely publish the source code for their services and applications. Even if they did, users would not be able to modify the code to change the way the service works, control how it manages their data, or enjoy other basic freedoms associated with running open source software on one's own computer or server.

One way to solve this dilemma is to build a private cloud using an open source platform, like OpenStack or CloudStack.

This is a great idea if you have the resources necessary to set up and manage a cloud computing environment on your own. But that's a fair amount of work. It also requires you to acquire your own hosting infrastructure. You miss out on the convenience, limitless scalability, and CapEx-free nature of the public cloud.

There's another approach to running open source software in the cloud that delivers most of the benefits of open source and most of the benefits of public cloud: using public cloud infrastructure to host open source software that you manage yourself.

You can, in other words, run whichever open source applications you want, such as Apache HTTP Server, WordPress, or Elasticsearch, on an AWS EC2 instance or an Azure Virtual Machine.

You won't totally control the underlying infrastructure, of course. You also can't stop the public cloud provider from collecting data about what you're doing on its servers. In these respects, you won't enjoy quite as much privacy and flexibility as you would if you ran open source on your own private server.

The trade-off, however, is that you can scale your host infrastructure up virtually without limit. You also only have to pay for the hosting resources you actually use, and you don't have to buy any servers upfront to run your applications.
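The pay-for-what-you-use trade-off is easy to see with a back-of-envelope calculation. The prices below are purely hypothetical placeholders, not any provider's actual rates; the point is the shape of the comparison, not the numbers:

```python
# Back-of-envelope comparison of pay-as-you-go cloud hosting versus an
# upfront server purchase. All prices are hypothetical placeholders.

def cloud_cost(hours_used: float, hourly_rate: float) -> float:
    """Pay only for the hours actually consumed; no upfront spend."""
    return hours_used * hourly_rate

def owned_server_cost(purchase_price: float, lifetime_months: int,
                      monthly_power_and_space: float) -> float:
    """Amortized monthly cost of buying and running your own hardware."""
    return purchase_price / lifetime_months + monthly_power_and_space

# A VM billed at a hypothetical $0.05/hour, run 200 hours this month:
burst_workload = cloud_cost(hours_used=200, hourly_rate=0.05)

# A hypothetical $3,000 server amortized over 36 months plus $40/month to run:
steady_server = owned_server_cost(3000, 36, 40)

print(f"cloud (200 h): ${burst_workload:.2f}/month")   # $10.00
print(f"owned server:  ${steady_server:.2f}/month")    # $123.33
```

The comparison flips, of course, for workloads that run flat-out around the clock, which is exactly why the choice depends on your usage pattern.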

A third approach is to run open source software in the cloud using a managed service from a cloud vendor. For example, you could run Kubernetes via Amazon EKS or Azure AKS. Or, you could use Amazon OpenSearch Service instead of setting up Elasticsearch (and related software) yourself.

The benefit of open source as a managed service in the public cloud is that it's simple and convenient. You don't have to provision infrastructure or install open source on it yourself.

On the other hand, you lose out on all of the flexibility that open source would otherwise confer. You can only use your software in ways that your cloud vendor supports. Your ability to modify the software's configuration is usually limited. You certainly can't modify the software's source code. You end up, in other words, with the type of "Service as a Software Substitute" scenario that people like Stallman warn about.

On the upside, one could argue that using open source as a managed service in the public cloud could be a stepping-stone to using the same open source platforms in ways that grant users more freedom. If you run EKS, for example, maybe you'll eventually decide to deploy Kubernetes yourself, instead of relying on a managed service. Or you might one day move from OpenSearch to a self-managed ELK stack.

No matter how you slice it, running open source in the cloud as opposed to on your own private infrastructure comes with some drawbacks. It may entail more effort than other cloud-based deployment options. And you may have less control over your software and data.

But given the different deployment approaches out there, it's usually possible to run open source in the cloud in a way that lets you achieve most of your goals, while minimizing the drawbacks. You just need to select the right strategy.

Top 10 PaaS providers of 2022 and what they offer you – TechTarget

PaaS is a cloud computing platform designed to enable organizations to deploy, provision and run applications without needing to build out the underlying infrastructure. In essence, the cloud provider delivers the infrastructure, while the organization either provides its own application or uses an application that has been made available by the cloud provider.

There are many PaaS providers to choose from, but these providers aren't created equal. Although most providers tend to offer the same basic set of services, many providers also have their own unique feature offerings and limitations. For example, a PaaS provider might choose to support Python, but not Java. Providers can also differ dramatically from one another in terms of pricing. As such, carefully assess the various PaaS providers to see which will best meet your needs before settling on a provider. To help, here's a breakdown of the top PaaS providers of 2022:
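A first pass at that assessment can be as simple as filtering providers by the runtimes they list. The sample data below mirrors languages named in this article and should be treated as an illustrative snapshot, not an authoritative feature matrix:

```python
# Sketch of a shortlisting step when comparing PaaS providers by runtime
# support. Entries are an illustrative snapshot, not a maintained matrix.
SUPPORTED_RUNTIMES = {
    "AWS Elastic Beanstalk": {"Java", ".NET", "PHP", "Node.js", "Python",
                              "Ruby", "Go", "Docker"},
    "Google App Engine": {"Node.js", "Java", "C#", "Go", "Python", "PHP"},
    "Heroku": {"Ruby", "Node.js", "Python", "Java", "PHP", "Go"},
}

def providers_supporting(language: str) -> list[str]:
    """Return the providers that list the given runtime, sorted by name."""
    return sorted(name for name, runtimes in SUPPORTED_RUNTIMES.items()
                  if language in runtimes)

print(providers_supporting("Python"))
print(providers_supporting(".NET"))   # only Elastic Beanstalk in this sample
```

In practice you would extend the same structure with pricing tiers and feature flags, since those vary at least as much as language support.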

AWS Elastic Beanstalk is Amazon's native platform for deploying web applications. The service supports Java, .NET, PHP, Node.js, Python, Ruby, Go and Docker. Code can be hosted on Apache, Nginx, Passenger or IIS web servers.

Like many of the other PaaS options discussed here, Elastic Beanstalk is designed to act as a managed service that frees you from having to build out infrastructure or perform any complex configurations. It automatically handles scaling, load balancing, health monitoring and capacity provisioning. Amazon also makes it possible to use CPU metrics to scale an application up or down based on demand. One thing that makes Elastic Beanstalk unique is that although it's designed to act as a managed service, it also enables you to take manual control over the underlying infrastructure -- if you wish, for example, to configure a workload to run on a specific EC2 instance type.
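The CPU-metric scaling behavior can be illustrated with a toy decision rule. The thresholds and instance limits below are hypothetical; in the real service they correspond to configurable auto scaling trigger settings:

```python
# Illustrative version of a CPU-based scaling rule like the one Elastic
# Beanstalk applies: add capacity above an upper CPU threshold, remove it
# below a lower one, and otherwise hold steady. All numbers are
# hypothetical defaults, not the service's actual settings.
def desired_instances(current: int, cpu_percent: float,
                      upper: float = 70.0, lower: float = 25.0,
                      minimum: int = 1, maximum: int = 8) -> int:
    if cpu_percent > upper:
        return min(current + 1, maximum)   # scale out, capped at maximum
    if cpu_percent < lower and current > minimum:
        return current - 1                 # scale in, floored at minimum
    return current                         # hold steady in the dead band

print(desired_instances(current=2, cpu_percent=85.0))  # scale out -> 3
print(desired_instances(current=3, cpu_percent=10.0))  # scale in  -> 2
print(desired_instances(current=2, cpu_percent=50.0))  # steady    -> 2
```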

Engine Yard is a fully managed DevOps platform that is designed to simplify AWS, so you only need to write code and then push the code to the remote repository. Engine Yard handles all of the complexities associated with tasks such as containerizing the application and running the application on a Kubernetes cluster.

Besides handling the basic tasks associated with managing Kubernetes, Engine Yard also handles a variety of other low-level tasks, such as patch management, backups and automatically scaling an application based on its performance requirements. Applications that are hosted with Engine Yard are tied to Grafana, which provides basic resource metrics. Additionally, Engine Yard can provide automatic alerts for application failures and other conditions.

Google App Engine is a platform that enables organizations to build their own applications on top of a serverless platform. Google App Engine is fully managed and supports Node.js, Java, C#, Go, Python and PHP. Although these are the officially supported languages, Google also lets customers bring their own language, as well as use any library or framework, through containerization.

Google App Engine is designed so you can build applications without regard for the underlying infrastructure. The serverless platform is fully managed and lets organizations scale their applications without having to do anything to the underlying infrastructure or perform complex configuration tasks.

Heroku is a container-based PaaS environment. Applications run inside of smart containers, which Heroku calls dynos, and Heroku handles all of the infrastructure requirements for those containers. This includes logging, security, failover and orchestration. Containers can be scaled horizontally or vertically, and application metrics are available to help monitor application response times.

Heroku offers PostgreSQL as a service, but applications can also use any of the numerous add-ons that Heroku makes available. Some of these add-ons include New Relic, MongoDB, SendGrid, Searchify, Fastly, Papertrail, ClearDB and MySQL. Heroku also supports the use of in-memory key-value datastores.

From a developer standpoint, some of the more compelling features include GitHub integration and code and data rollback capabilities.

Like other major cloud platforms, such as AWS and Azure, IBM Cloud offers both PaaS and IaaS capabilities. There are two main PaaS options offered within the IBM Cloud.

The first of these services is Red Hat OpenShift on IBM Cloud, which is intended for those who want to develop cloud-native applications. The service can be used to provision and scale workloads and automate the update process, and IBM also enables you to deploy managed, highly available clusters with a single click.

IBM's other platform is IBM Cloud Pak for Applications. This service is designed to help organizations modernize their existing applications.

The Mendix Application Platform as a Service (aPaaS) is designed to simplify the process of deploying applications through single-click deployment. Although many PaaS tools simplify the process of deploying an application to the cloud, Mendix is unique in that it supports public and private clouds, as well as on-premises environments.

Another difference between Mendix aPaaS and some of the other platforms is that Mendix doesn't concentrate solely on the deployment process. It also seeks to expedite the application development process. Mendix offers reusable application components so that you don't have to build applications from scratch. As such, Mendix aPaaS could be described as a low-code environment.

Like other hyperscalers, Microsoft Azure includes numerous services that could be classified as PaaS. One such service is Microsoft Azure Pipelines. Azure Pipelines is designed for extreme flexibility. It enables you to build applications with Node.js, Python, Java, PHP, Ruby, C/C++ and, of course, Microsoft .NET. Applications can be run in parallel on Windows, Linux and macOS. The service also enables the development of iOS and Android apps.

Although it's easy to assume that Azure Pipelines is designed to deploy apps to the Azure Cloud, Azure Pipelines works with all of the major clouds including Azure, AWS and Google Cloud. The service also enables you to build and push images to container registries such as Docker Hub.

Red Hat OpenShift is another Kubernetes-based PaaS option. Besides automating Kubernetes, OpenShift is designed to help organizations build applications more quickly. The software's source-to-image capabilities enable you to go straight from the application's code to a container.

Another nice thing about Red Hat OpenShift is that Red Hat provides an easy-to-use management console that lets you see and manage all of your Kubernetes clusters at once. Additionally, OpenShift is designed to provide a consistent experience, regardless of where the underlying OS is running.

VMware Cloud Foundry is a PaaS tool designed to simplify the process of running code on a Kubernetes cluster. One of the things that differentiates Cloud Foundry from some of the other Kubernetes PaaS tools is that Cloud Foundry is Kubernetes native. It enables apps to be run as OCI-compliant container images and also works with other open source projects such as Envoy, Fluentd and Istio.

Cloud Foundry is designed to help you get started in less than 10 minutes, and because Cloud Foundry is lightweight, it can be containerized. This means you can choose between the Diego container scheduler and the standard Kubernetes scheduler.

Wasabi is a cloud storage provider and isn't technically a PaaS platform. However, the Wasabi cloud can play a role in an organization's PaaS use. Even though PaaS seeks to simplify application deployment by offering infrastructure as a managed service, application data still must be stored somewhere. Although you can store data on the cloud that is hosting the application, storing data on the Wasabi cloud might be a less costly option.

Wasabi has partnerships with exchange providers such as Equinix, Flexential, Limelight Networks and Megaport. These partnerships enable Wasabi to offer direct, high-speed connectivity to hyperscalers such as AWS, Google and Azure.

MSFT: Cloud Computing in 2022: The Complete Investor's Guide – StockNews.com

According to Gartner, the global cloud computing market will continue growing at a 17.5% annual rate over the next decade. Given its current size of $397 billion, that pace puts the industry on track to top $1 trillion within roughly six years.
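Compounding that 17.5% rate from the $397 billion starting point shows how the projection plays out year by year, with the trillion-dollar mark crossed after six years of growth:

```python
# Compounding Gartner's quoted 17.5% annual growth rate from the article's
# $397 billion starting point, to see when the market crosses $1 trillion.
GROWTH_RATE = 0.175
START_BILLIONS = 397.0

size = START_BILLIONS
years = 0
while size < 1000.0:
    size *= 1 + GROWTH_RATE
    years += 1
    print(f"year {years}: ${size:,.0f}B")

print(f"crosses $1T after {years} years")
```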

For investors, this type of growth and scale means there are many different investment opportunities. This report will provide a broad overview of the industry, examine some of the most promising niches, and then offer insight into some of the most intriguing stocks in the sector: Google (GOOGL), Microsoft (MSFT), Workday (WDAY), Veeva Systems (VEEV), and SAP (SAP).

What is Cloud Computing?

At one time, heavy computing power, a term for all the processing power from servers in data centers that is necessary to operate the cloud, was an expensive and rare asset available only to big corporations and universities. And technologies of the future, such as AI, ML, and virtual reality, will all require even more processing power.

As costs have fallen over the past couple of decades, it has increasingly become a commodity, to the point where the cost of distribution exceeds the cost of production. Now, even distribution costs have declined while capacity keeps improving.

Due to these developments, any business can access this heavy computing power and leverage technology to achieve its goals at much lower cost. All types of applications requiring computing power are now run through the cloud. Examples include web and application hosting on AWS or Azure, customer relationship management with Salesforce, or storing photos and videos with Google Drive or Dropbox.

As the costs have dropped, the market has expanded. And it's enabled developers to build applications on top of these cloud platforms and applications.

It's also had notable real-world impacts. For example, the majority of apps and services on smartphones are enabled by cloud computing. The cost of starting a company has also declined precipitously. Previously, companies would have to invest significant sums in buying or renting servers. Now, companies can access these resources on a per-use basis, turning IT into a flexible cost rather than a fixed cost.

Benefits of Cloud Computing

Cloud computing makes a company's IT, technology, and operations more powerful, agile, and flexible by enabling the delivery of enterprise-level solutions on demand to increase customer engagement and improve operational efficiency.

Cloud computing encompasses a wide variety of categories. Now, all types of IT services are available on the cloud, from basic infrastructures such as compute and storage to application development platforms and specific software applications. These services are hosted remotely and accessible through the Internet.

Cloud computing enables companies to scale their IT stack with minimal difficulty. Before, IT services required servers, equipment, and people to operate and manage, which was a capital- and time-intensive process. For many firms, this constrained growth due to the cost, complexity, and challenge of scaling IT resources.

Now, these services can be scaled according to need in a frictionless manner. As a result, employees can use and operate enterprise-level software on a per-user or per-use basis. Future disruptive technologies, including notable examples like autonomous driving and artificial intelligence, are being built on top of and integrated with existing cloud services.

In addition to the attractive economics of cloud computing, cloud systems require less upkeep as maintenance and security needs are handled by cloud computing providers. Therefore, companies have less need for dedicated IT personnel who would spend time on adding new servers and upgrading equipment.

Over time, cloud systems have evolved and the number of use cases has rapidly multiplied. The three major categories of cloud computing are Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS).

See the table below for information on each:

Every cloud computing company falls into one of these three categories. However, PaaS and IaaS are dominated by large companies, while most startups fall into the SaaS category.

This is due to the economies of scale and network effects of these businesses. More users mean more demand for services which leads to more data and iterations to improve the product. This scale leads to lower costs and higher barriers to entry. As a result, these businesses tend to have high rates of recurring revenue and impressive gross margins which have been rewarded by the market over the last decade with high multiples.

Early Stages of Growth

Given the quantitative and qualitative advantages of cloud computing, it's not surprising that companies are migrating to these platforms, services, and applications. Companies no longer have a choice and must do so to remain competitive.

However, this process remains in its early stages. As of February 2021, 92% of S&P 500 companies had a job posting for a cloud migration specialist, indicating that cloud spending remains a priority. Even despite its impressive growth, the average IT environment today is still majority non-cloud, although that share is expected to shrink in the years ahead.

Further, many types of corporate investments come with uncertainty about the return on investment. Cloud spending is an exception: costs go down while capabilities increase.

Investing in Cloud Computing

In 2022, the cloud computing industry is estimated to be worth just under $400 billion and cross the trillion-dollar threshold by 2026. Investors have many choices when it comes to investing in the sector. One option is to buy a broad cloud computing ETF like the WisdomTree Cloud Computing ETF (WCLD) or the Global X Cloud Computing ETF (CLOU).

Another route is to invest in cloud infrastructure stocks like GOOGL, Amazon (AMZN), or MSFT. These companies are the leaders in the industry and are unlikely to be disrupted given their scale and early-mover advantage. The vast majority of cloud applications are built on top of these platforms, so these companies will benefit from the industry's overall growth. Another approach is to focus on the PaaS or SaaS companies that are building more targeted solutions for their customers.

Below, we will analyze 5 cloud computing stocks that are a representative sample of the total industry:

Google (GOOGL)

GOOGL is the clear leader when it comes to online search with over 80% market share. From this, it derives strong revenue growth and cash flow by selling ads. Using its search proceeds, Google has made investments to grow in new areas. Among these, Google Cloud is one of its most successful bets.

Google Cloud is the third-largest cloud provider in the world with an 8% market share, trailing only Microsoft's Azure and Amazon's AWS. However, it's the second-fastest-growing major cloud provider, with a 45% growth rate. Interestingly, Google Cloud is built on the same infrastructure that Google uses to deliver its own products and services, such as Gmail, YouTube, and Google Docs.

Google Cloud counts a large number of premier companies as customers, including Shopify (SHOP), Paypal (PYPL), Twitter (TWTR), and Goldman Sachs (GS). It's also been aggressively forming partnerships with telehealth companies to deploy their products on top of Google Cloud.

Although Google Cloud remains a small part of the larger company, it does make up a large share of recent revenue growth. Given the high margins of cloud businesses, this growth will start filtering to the bottom line as well.

The POWR Ratings are constructive on Google as well: it has a B rating, which translates to a Buy. B-rated stocks have posted an annual performance of 21.1%, which compares favorably to the S&P 500's annual performance of 8.0%. The POWR Ratings also evaluate stocks across different categories, including Value, Growth, Momentum, Stability, Sentiment, Quality, and Industry. To see GOOGL's component grades, please click here.

Microsoft (MSFT)

MSFT is currently the leader in the Cloud Wars with $18.3 billion in revenue. In fact, the company has a commanding lead, given that second-place Amazon's and third-place Google's combined cloud revenue is $17.3 billion. It's also the choice of large enterprises, given their pre-existing relationships with Microsoft.

Of course, MSFT famously pivoted to the cloud in 2014 upon the selection of Satya Nadella as Microsoft CEO. This decision was pivotal for MSFT, as its stock price had been moribund for much of the previous decade. Since then, however, Microsoft has been among the best-performing stocks in the S&P 500 and is now the second-most-valuable company in the world.

Microsoft essentially turned many of its software products into a subscription which resulted in higher lifetime customer value, higher margins, and more opportunities to upsell. However, the crown jewel of its business is Azure.

MSFT has been layering targeted solutions for its cloud customers in all types of areas. Currently, it counts over 200 specific offerings, while countless more have been produced by the developer ecosystem. Some of the most popular include data analytics, AI, machine learning, and visualization.

Of course, MSFT has a lot going for it beyond the cloud, but the cloud did account for nearly 50% of the company's revenue growth in the last quarter. The POWR Ratings have a constructive outlook on MSFT: it has a B rating, which translates to a Buy. The POWR Ratings are calculated by taking into account 118 different factors, each with its own weight.

The POWR Ratings also evaluate stocks by different components. In terms of Sentiment, MSFT has an A rating, which isn't surprising given that the consensus price target is $363, implying 26% upside. To see more of MSFT's component grades, including Value, Momentum, Stability, Growth, Quality, and Industry, click here.

Workday (WDAY)

WDAY provides enterprise cloud applications with offerings that include financial management applications, cloud spending management solutions, and Workday applications for planning. It is the leading provider of software-based solutions for human resource needs.

WDAY has been one of the best performers of the last decade, with a 357% gain since its IPO in 2012. That's not surprising when considering that companies are increasing spending on their IT systems, software, and cloud systems at a double-digit rate, which is forecast to continue over the next decade.

Additionally, WDAY's products allow companies to save money and become more efficient. Newer companies are able to run smaller HR departments that handle just as much workload and responsibility as larger ones. We also learned during the pandemic that these software systems are more essential to a company's operations than office space.

Once companies choose a software or cloud provider, they are unlikely to change given the cost and complexity of changing systems. Further, once companies have people on their platforms, they are able to unlock more opportunities for monetization. This is reflected in WDAYs high rates of recurring revenue.

The stock has been punished along with other tech and growth stocks, leading to a 27% decline from its all-time high in November 2021. However, WDAY should be bought on the dip due to its strong earnings and business momentum.

In its last quarter, the company had a 21% increase in revenue with 90% of it coming from recurring revenue. It also achieved an important milestone with positive EPS that is expected to grow by 45% next year.

Given these positives, it's not surprising that WDAY has an overall B rating, which equates to a Buy. It's also quite strong in terms of component scores, with an A for Growth and a B for Quality. Click here to see more of WDAY's POWR Ratings, including component scores for Momentum and Value.

Veeva Systems (VEEV)

VEEV is a cloud computing and enterprise software company for the healthcare, pharmaceutical, and life sciences industries. It provides software solutions for the unique needs of companies in these industries, from meeting regulatory standards to conducting clinical trials to managing operations.

VEEV has very favorable economics as the healthcare sector is massive and always expanding, while cloud computing spending is rising. Further, VEEV has high margins and rates of recurring revenue. There are very few competitors in the space given the regulatory barriers and trust issues required for such sophisticated and sensitive work.

For the full year, VEEV is expected to earn $3.69 per share on $1.8 billion in revenue, a 26% and 28% increase, respectively. Next year, this growth is expected to continue as analysts are forecasting 8% earnings growth and 13% sales growth. From the summer of 2021 to February of 2022, the stock has pulled back more than 40%, creating a nice entry point given its long-term prospects.
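Those growth figures directly imply the underlying year-over-year numbers; a quick back-of-envelope check:

```python
# Back-of-envelope check on the VEEV figures quoted above: expected
# full-year EPS of $3.69 (up 26%) and revenue of $1.8 billion (up 28%)
# imply the prior-year baselines, and next year's forecast growth of
# 8% (earnings) and 13% (sales) layers on top of this year's estimates.
prior_eps = 3.69 / 1.26          # implied prior-year EPS, ~$2.93
prior_revenue = 1.8 / 1.28       # implied prior-year revenue, ~$1.41B

next_eps = 3.69 * 1.08           # implied next-year EPS, ~$3.99
next_revenue = 1.8 * 1.13        # implied next-year revenue, ~$2.03B

print(f"prior-year EPS:     ${prior_eps:.2f}")
print(f"prior-year revenue: ${prior_revenue:.2f}B")
print(f"next-year EPS:      ${next_eps:.2f}")
print(f"next-year revenue:  ${next_revenue:.2f}B")
```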

Thus, it's not surprising that VEEV has an overall B rating, which translates to a Buy in our POWR Ratings system. It also has an A for Quality, as it's one of the leading stocks in a large total addressable market with only a handful of competitors. VEEV's B for Growth also makes sense given that it sits at the intersection of two large and growing markets: healthcare and cloud computing. Click here to see more of VEEV's POWR Ratings, including grades for Value, Momentum, and Stability.

SAP (SAP)

SAP is a leading company in the enterprise software space and one of the originators of the entire category. The company operates through segments that include Applications, Technology & Services and the SAP Business Network. Over time, SAP has branched out from providing enterprise resource planning software that helps companies optimize their inventory, supply chains, and production processes into all sorts of operations, including analytics, CRM, marketing, and business processes.

One reason that investors should consider having SAP in their portfolios is that once companies start using its software, they are unlikely to change, especially as employees become comfortable with and reliant on it.

Switching to a new software system would incur huge costs and disruptions as data would have to be transferred and employees would have to be re-trained. Additionally, since SAP is known as one of the top enterprise software companies, corporate managers are less hesitant to spend money on their products.

SAP is a mature company, but it's still managing to grow at an impressive rate, as analysts forecast 11% earnings growth next year. Further, the company is reasonably valued with a forward P/E of 19, a 2% dividend yield, and 21% profit margins. This makes the stock an intriguing buy-the-dip candidate following its 25% drawdown during the tech selloff in 2021.

SAP's POWR Ratings reflect this promising outlook. The stock has an overall A rating, which equates to a Strong Buy in our proprietary rating system. It also has strong component scores, including a B for Sentiment. Currently, 4 out of 7 analysts covering the stock have a Buy rating, while none have a Sell. They also have a consensus price target of $152, implying 21% upside. To see more of SAP's POWR Ratings, click here.

MSFT shares were trading at $295.97 per share on Friday afternoon, up $1.38 (+0.47%). Year-to-date, MSFT has declined 11.82%, versus an 8.05% decline in the benchmark S&P 500 index during the same period.

Jaimini Desai has been a financial writer and reporter for nearly a decade. His goal is to help readers identify risks and opportunities in the markets. He is the Chief Growth Strategist for StockNews.com and the editor of the POWR Growth and POWR Stocks Under $10 newsletters. Learn more about Jaimini's background, along with links to his most recent articles.

Storage Is Going To Have To Deal With Clouds And Edges – The Next Platform

While on-premises datacenters are strategic to large enterprises, and will be for the foreseeable future, hybrid clouds and the edge are also an increasingly important part of the IT platform portfolio. But the road out of the datacenter and into the future with clouds and edges is not always an easy one for companies to navigate.

The portability of apps across highly distributed locations, and keeping track of the data and the cost of deploying hybrid clouds, are just some of the challenges facing organizations as they expand their workloads and their data out into clouds and edges.

"It has become clear that multicloud, multi-datacenter and edge environments will become increasingly common in order to meet the demands of data gravity and best serve end users and data analytics teams," Radhika Krishnan, chief product officer at Hitachi Vantara, tells The Next Platform. "But this approach also comes with tremendous complexity as customers need to manage and integrate with multiple different cloud operating models for their applications, data and infrastructure."

Hitachi Vantara is looking to ease the transition with a host of new products and services aimed at delivering the agility and scalability that are central to hybrid and private clouds. It's part of a larger push by the company, which came into being in 2017 through the merger of Hitachi Insight Group, Hitachi Data Systems and Pentaho (with Hitachi Consulting added later), to evolve into more of a solutions company offering hardware, software and services to enterprises that are adopting hybrid cloud strategies.

It's similar to evolutions that other traditional datacenter hardware makers, from Dell Technologies to Hewlett Packard Enterprise to Lenovo to Cisco Systems, are undergoing. "A key is to understand that IT will be a single environment that stretches across multiple locations," Krishnan says.

"The datacenter is not going away, and the thinking that public cloud is for agile, cloud-native workloads and datacenters are for traditional workloads is not a reality for many customers that need to modernize mission-critical workloads and manage sensitive data," she says. "Customers are looking for a simple and consistent hybrid cloud operating model that has cloud-like agility and automation and that can seamlessly connect their applications, data and infrastructure across on-prem, near-cloud, and public cloud."

Delivering such capabilities is key to vendors like Hitachi Vantara. According to a report last year from IT management company Flexera, 92 percent of enterprises have a multicloud strategy, while 80 percent are embracing hybrid clouds. In addition, 31 percent spend more than $12 million annually in public clouds.

Hitachi Vantara is leaning back on the storage expertise from its Hitachi Data Systems days for part of the product offerings. The company is unveiling Virtual Storage Software Block (VSS Block), a software-defined data platform that extends its larger virtual storage platform to include cloud-native applications running on x86 servers. It leverages the vendor's Storage Virtualization Operating System (SVOS) and provides a single data plane for mid-range, enterprise and software-defined storage.

VSS Block also includes Hitachi's Polyphase Erasure Coding to improve data read performance and efficiency, and it supports both mirroring and erasure coding. It's compatible with storage systems that use iSCSI and Fibre Channel host connections.
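Since the announcement leans on mirroring and erasure coding without unpacking the difference, a rough illustration may help: mirroring duplicates every block (100% capacity overhead), while erasure coding stores parity that can rebuild a lost block at a fraction of that cost. The sketch below uses generic single-parity XOR coding, not Hitachi's Polyphase algorithm, whose details the article does not describe:

```python
# Conceptual sketch of single-parity erasure coding (NOT Hitachi's
# Polyphase algorithm): k data blocks plus one XOR parity block can
# survive the loss of any single block, at a capacity overhead of 1/k
# versus the 100% overhead of mirroring.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]  # k = 3 data blocks
parity = xor_blocks(data)           # one parity block (~33% overhead)

# Lose block 1, then rebuild it from the survivors plus parity:
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]           # b"BBBB" recovered
```

Production systems use Reed-Solomon-style codes that tolerate multiple simultaneous failures, but the capacity-versus-resilience trade-off is the same.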

In addition, the company is rolling out its VSP E1090, a new NVM-Express mid-range storage array that includes scale-out capabilities for virtual storage. It offers latency as low as 41 microseconds, 8.4 million IOPS of performance, data-in-place migration, and a fast install that organizations can perform themselves. In addition, Hitachi's Replication Plug-in for Containers automates replication between Kubernetes clusters and storage systems in different sites.

Hitachi Vantara's Ops Center Clear Sight is a cloud management tool that uses artificial intelligence (AI) to bring cloud-based reporting and analytics to the Hitachi Virtual Storage Platform (VSP). In addition, Hitachi Cloud Connect offers a near-cloud tool that integrates with the public cloud and extends the data fabric, supporting cloud applications from public cloud providers.

Companies can deploy in the cloud the same enterprise-class security, reliability, scalability, and resilience that VSP delivers in their own datacenters, Krishnan says. They get management, monitoring, and predictive analytics capabilities to properly control their entire storage environment, regardless of location. Even deployments that span multiple worldwide locations and a company's own datacenters can be managed from a single location.

The vendor is using its Unified Compute Platform RS (UCP RS) software-defined hybrid cloud platform to deliver a "cloud in a box." It's powered by VMware Cloud Foundation with Tanzu, VMware's Kubernetes portfolio, to drive consistency between on-premises and public cloud environments. Organizations can deploy applications on virtual machines or in Kubernetes containers through a single management experience.

The vendor is building on a partnership with VMware that stretches back more than two decades, well before the introduction of Hitachi Vantara, and offers integrations with such tools as vCenter Server, vRealize cloud automation, vSAN, VMware Cloud, and Site Recovery Manager, Krishnan says.

At the same time, the joint offering created with Cisco, Hitachi Adaptive Solutions, now includes Hitachi Vantara's latest VSP storage technology and supports Cisco's new UCS X-Series modular compute system, all of which is managed by Cisco's Intersight cloud platform.

Hitachi Vantara also is announcing its Application Reliability Services, a mixture of cloud consulting and management services for managing and automating cloud workloads. The services apply Site Reliability Engineering (SRE) principles and AI-powered automation to the software development lifecycle and workload management. The company claims the services will improve application reliability and the time to detect and recover from faults by more than 25 percent each, and improve the change failure rate by 15 percent.

Enterprises have rapidly migrated and modernized cloud workloads and have created DevOps teams to accelerate the delivery of new software features, Krishnan says. The problem is that they haven't modernized their IT processes as quickly, which can lead to reliability issues. She adds that reports show almost 40 percent of organizations are seeing app downtime, with 90 percent of them pointing to migration, security, and troubleshooting issues.

Hitachi Vantara is aiming to address this with an SRE-led approach to integrate operations with DevOps by using software engineering to automate and simplify workload management while designing for application reliability, Krishnan says.

Read more:
Storage Is Going To Have To Deal With Clouds And Edges - The Next Platform

Logicdrop, Vaadin Tout Cloud Native Java Runtime Quarkus – The New Stack – thenewstack.io

When the cloud began to take over enterprise application hosting, many developers also found that they needed new technologies, languages and databases to do the job, hence the growth of JavaScript, NoSQL and Kubernetes.

Alex Handy

Alex is a technical marketing manager at Red Hat. In his previous life, he cut his teeth covering the launch of the first iMac before embarking upon a 20-plus-year career as a technology journalist.

Now that the cloud is a permanent fixture inside enterprise development shops, it's only natural to bring legacy applications to the cloud. But deciding just how to do that remains an issue of contention and discussion for many teams.

For enterprises with a large variety of Java applications in their environments, Spring Boot has long been the path forward to the cloud for enterprise Java users. But over the course of 2021, Quarkus, the Kubernetes-native pure Java stack, began to gain momentum as an alternative path to the cloud for Java applications.

Now, in 2022, that growth is spurring Quarkus to become one of the most popular topics for Java developers. It's also allowing companies to adopt a new stack.

That includes developers like KimJohn Quinn, co-founder of Logicdrop. His company makes a business automation platform that abstracts complexity away from users, creating a low-code platform for building business rules in the cloud.

Quinn said his company has been using Java since it was founded in 2010. He said he and the team were attracted to the productivity and time savings that Quarkus provides.

Quinn said the team at Logicdrop had adopted Quarkus less than a year ago. "In six months, we ported almost 70% of our platform over to Quarkus, with two engineers doing most of that work."

Quinn is now a full believer in Quarkus and has begun planning the future of the company's Java applications through its lens.

"Where at one time we seriously were considering other languages to replace most of our existing Spring Boot-based platform, we now think Quarkus is the future and have standardized on it as our primary stack. Quarkus lets us use the Java we all know and love as well as accepted standard APIs. We have a much more cohesive microservice environment that developers of any skill set are comfortable working in, it fits very well into our CI/CD process, configuration is simple and straightforward, and the developer tools are solid. Adding icing to the cake, we get native executables comparable to, if not smaller than, other alternatives such as Node, Python or C# to boot. Quarkus has been a refreshing change, and it really does make developing in Java fun again."

"The Java world is rich with frameworks and APIs, each having their own merits," said Quinn. "For any developer, this plethora of choices can be overwhelming, especially when building a product that needs to be performant, maintainable and flexible. Spring is almost always the obvious go-to choice because of the richness of its libraries, the length of time it has been around and the familiarity many developers have with it, but it is those very reasons we sought out an alternative. We wanted the simplicity of Google Guice for DI/IoC but also a robust platform with enough supporting libraries to meet our needs in a more conventional Java way. Quarkus literally has everything we need."

Since Spring Boot has a rich battle-tested ecosystem supporting it, the company had hoped that it would enrich the development process and simplify integration with various Java and cloud technologies, making it faster and easier.

"In the end, as our platform grew, the combination of opinionated views on configuration, injection magic, and very seldom definitive answers on how to perform everyday tasks beyond the obvious meant we found ourselves dealing with an increasingly complicated and bloated system," Quinn said.

"Our core product, originally based on Spring Boot, bent Drools to our will before Quarkus and Kogito were publicly available. Later we added new technologies such as Reactive streams, Camel integration and messaging," he said.

"Still, a good portion of our platform, probably like many others, relied on common boilerplate approaches for security, MVC (model-view-controller) and CRUD. Spring was overkill in terms of what we needed versus what we used versus how we used it. Further complicating this was our head-on adoption of Kubernetes and Knative. Using Spring Boot, our container sizes and startup times were too heavy to play nicely in our environment."

That was one of the major victories from moving to Quarkus, said Quinn.

"When we made the jump to Quarkus, it dramatically changed how we did things for the better, and it played with Kubernetes and Knative out of the box. Everything we needed, and more, was available in the Quarkiverse, and in short periods of time, we could prototype features that would have normally taken us much longer to just get our arms around," said Quinn.

Quinn said that Quarkus is now the basis for the company's core application and some of its other projects as well.

"We shifted to Quarkus, and even in our early experimentation, we started to see a dramatic improvement in everything. We were going from one- to two-week first merge cycles to a matter of hours. Even to this day, we are pushing things out in hours. Features are flying in and out lightning fast, and we've been able to bring in developers who were formerly Node developers and Python developers, and we have a common understanding across our whole Quarkus platform by everybody. There's productivity across the board. Quarkus has let us focus on what we need to do rather than how to get there first," said Quinn.

Quinn particularly likes the way that Quarkus "just works," something not necessarily familiar to Java developers.

"Getting set up was no problem. What pushed us over the top coming from Spring Boot: We took the OpenID Connect extension, hooked it into Keycloak, and it worked. As an example, we said, 'Let's throw it into Auth0 and see what happens.' We were sure it was going to explode. We sent it, and I got a 401 HTTP error. That's exactly what I'm looking for. We said, 'Let's get a token,' and it logged in and executed. When a single property changed, we bounced between Keycloak and Auth0 and everything ran," said Quinn, explaining how easy it was to tie the Quarkus landscape into Kubernetes authentication procedures.
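Quinn's Keycloak/Auth0 story comes down to the Quarkus OIDC extension being driven almost entirely by configuration. As an illustration of the kind of single-property switch he describes (the URLs, realm, and client ID below are placeholders, not Logicdrop's actual settings):

```properties
# application.properties -- illustrative values only
quarkus.oidc.auth-server-url=https://keycloak.example.com/realms/my-realm
quarkus.oidc.client-id=my-app

# Per Quinn's account, pointing the same application at Auth0 is
# largely a matter of repointing the issuer URL:
# quarkus.oidc.auth-server-url=https://my-tenant.auth0.com/
```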

Matti Tahvonen, senior developer advocate at Vaadin, said Quarkus has increased productivity among its developers as well. The company offers a web framework for Java applications, enabling modern UX development for modern browsers while still using Java behind the scenes.

Because Vaadin offers a UI framework for Java developers, the company can see the trends among them fairly easily, and it sees Quarkus gaining steam.

"To date, Spring Boot has the deepest integrations with Vaadin, created because Spring Boot was the most popular path to the cloud for many Java developers. Vaadin also has lots of users on Java EE servers, as well as some companies that want to build their own stack on plain servlet containers. Lately there have been requests for Quarkus as well," Tahvonen said.

But the reason Vaadin is now providing official integrations for Quarkus is not simply because customers are asking for it; the company had already chosen to build its big product with Quarkus as well. Indeed, some customers had already built community integration tools for Vaadin and Quarkus before Vaadin had a chance to build its own. "The demand is there," said Tahvonen.

The customers he has seen transitioning to Quarkus are coming mostly from Java EE servers.

"So if they have been using JBoss in the past, now they are looking into Quarkus as a replacement, because Spring Boot has totally changed how people package and deploy their applications, and Quarkus is doing something similar, so the server is part of the application and not the other way around," he said.

And Quarkus is pure Java, like Vaadin, another thing Tahvonen and his team like about it.

"Quarkus and Vaadin are a great combination," said Tahvonen. "Vaadin is pure Java, only Java. There's no XML, no HTML, no CSS, unless you want to get to the lower abstraction level. You can work in one single language. The biggest benefit of Vaadin is that you don't need to do context switches between languages and execution platforms."

While Java may be viewed as an older language, Quarkus is helping companies around the world modernize their core applications for cloud deployment. Decathlon was able to ramp up to Quarkus from Spring Boot in a single week. Abraxas used Quarkus to build the backbone of its new tax solution for Swiss government tax agencies. Vodafone Greece migrated dozens of applications and improved performance over Spring Boot.

Quarkus offers a host of modern luxuries for the enterprise Java developer. From faster boot and REST response times to smaller memory footprints, Quarkus can be used in containers without overwhelming the host server with mundane, traditional Java overhead. And it's open source, so developers can contribute to it if they want to give back to the community.



Growth in cloud and e-commerce drives Alibaba quarter revenue up 10% – ZDNet

Image: Getty Images

Alibaba Group has reported its latest quarterly earnings for the period ended 31 December 2021, the first since the company unveiled its major restructuring plans at the start of December.

Under the restructure, the Chinese internet giant has split its global and domestic e-commerce business into two separate units. During the restructure announcement, Alibaba said it would focus its investment on the "two strategic pillars" to better drive synergies across its consumer and wholesale commerce platforms in China as well as internationally.

Starting this quarter, the company has updated its segment reporting to reflect this restructure, which now includes China commerce, international commerce, local consumer services, Cainiao, cloud, digital media and entertainment, and innovation initiatives and others.

Overall, for this quarter, the Chinese giant reported revenue came in at 242.5 billion yuan ($38 billion), an increase of 10% year-on-year, which the company attributed to growth experienced by its China commerce, cloud, local consumer services, and international commerce segments.

"We have always innovated and invested for the long term throughout Alibaba's history. As demonstrated by our new segmental disclosure, our continued investments in growth initiatives have seen tangible results," said outgoing Alibaba Group CFO Maggie Wu.

Alibaba's cloud revenue was $3 billion, up 20% from a year ago. The company said this was underpinned by growth from financial and telecommunication industries but was offset by "a top cloud customer's decision to stop using our overseas cloud services for its international business due to non-product related requirements and slowing demand from customers in the Internet industry such as online entertainment and education".

China commerce, which continues to make up the largest share of total revenue, was 7% higher than last year, reaching $27 billion. The company's local consumer services segment experienced the largest increase of the quarter, up 27% year-on-year to $1.9 billion, and international commerce achieved $2.58 billion, following an 18% jump on the prior corresponding period.

"Alibaba delivered steady progress this quarter as we continued to execute our multi-engine growth strategy in a complex and volatile market environment," Alibaba group chairman and CEO Daniel Zhang said.

"We achieved positive momentum in key strategic businesses through a disciplined focus on capacity building and value creation to fuel our future growth."

The company's global annual active consumers reached 1.28 billion for the 12 months ended 31 December 2021, following an increase of approximately 43 million during the quarter, comprising 979 million consumers in China and 301 million consumers overseas.


WD My Cloud Home: My Support Nightmare – The Mac Observer

Way back in 2017, I wrote about My Cloud Home, a (then) new Network Attached Storage (NAS) device from Western Digital (WD).

What's a NAS? Like Dropbox, Google Drive, iCloud Drive, and other cloud-based storage services, a NAS provides many of the same remote storage features with a delightful difference: no monthly charges.

I was impressed by the then-new My Cloud Home, which was reasonably-priced, easier-than-others to configure and use, and surprisingly full-featured for its price (starting at $159 for a 2TB model). I recommended it without hesitation and have used it for personal network storage ever since.

It performed well for the first couple of years before becoming less and less reliable. Its icon, which had appeared on my desktop for months at a time without intervention, began disappearing and requiring me to log in manually. It was annoying, but I was always able to log in and access my files. At least until recently.

Last week, when I attempted to log in, instead of mounting its icon on my desktop, I received an error message: "My Cloud is having trouble connecting to the server. Check your internet connection and try again."

My internet connection worked flawlessly for everything except my My Cloud Home, so I tried again. And again. When I was still unable to connect to my My Cloud Home after 24 hours, I contacted WD support and explained that my network-attached storage device is worthless if it's unable to connect to their (Western Digital's) servers.

After four days without a response, I submitted a second support ticket and asked to escalate my issue. I received an automated reply that they would escalate my case and follow up as soon as possible.

It's been six days since I could last access my files from the desktop, and I'm still awaiting a response.

To be fair, while I can't log in and mount an icon for my My Cloud Home device on the desktop, I can access my files via a web browser or the iOS app. It's inconvenient, but I can see and download my files when necessary.

Still, I got the device because it behaves like a directly connected storage device on my Mac, displaying an icon on my desktop as though it were a USB or Thunderbolt drive. Managing its files through a web browser is awkward, to say the least.

My point is that after six days without a response from Western Digital support, I no longer recommend WD products. Waiting a week or more to hear from a support rep is unconscionable. I'm not sure other storage vendors are better at support, but they couldn't be any worse.

Caveat Emptor.


Fujitsu is ending its mainframe and Unix services – TechRadar

Fujitsu has quietly revealed its plans to shutter both its mainframe and Unix server system business by the end of this decade.

In a notice posted to the Japanese IT giant's website, the company announced its plans to stop selling its mainframes and Unix server systems by 2030, though support will continue for an additional five years.

Fujitsu will stop manufacturing and selling its mainframe systems by 2030 as well as discontinue its Unix server systems by the end of 2029. As support services for both portfolios will extend for another five years, 2034 will mark the end of support for its Unix servers while 2035 will be the end of its mainframes.

In its notice, Fujitsu argues that everything in society will be connected by digital touchpoints in the near future, which will require new, robust digital infrastructure. As such, businesses will need to reevaluate their existing core systems and embrace a fully digital, hybrid IT model to remain competitive and sustainable.

Fujitsu's plan also includes a timetable for shifting its mainframes and Unix servers to the cloud as part of a new business brand called Fujitsu Uvance.

Through this new brand, the company aims to provide businesses access to computing resources such as HPC using an as-a-service model that will give them access to advanced capabilities when needed.

While the move makes sense for the future of Fujitsu, the company's mainframe customers now have a deadline before which they will need to migrate their mainframe applications to another platform or rebuild them from scratch on modern infrastructure. However, mainframes are a long-term investment for organizations that often handle their most mission-critical applications.

On the Unix server side, customers have things a bit easier as their workloads can be transitioned to Linux without too much of a hassle.

We'll likely hear more from Fujitsu as the company begins winding down both its mainframe and Unix server businesses.

Via The Register


The North American PC And PC/ABS In IT Server Industry is Expected to Reach $81+ Million by 2028 – Yahoo Finance


Dublin, Feb. 23, 2022 (GLOBE NEWSWIRE) -- The "North America PC And PC/ABS In IT Server Market Size, Share & Trends Analysis Report Product By Application (Polycarbonate, Polycarbonate/Acrylonitrile Butadiene Styrene), By Country (U.S., Canada, Mexico), And Segment Forecasts, 2021 - 2028" report has been added to ResearchAndMarkets.com's offering.

The North America PC and PC/ABS in IT server market size is expected to reach USD 81.95 million by 2028, according to a new report by the publisher. The market is expected to expand at a CAGR of 4.3%, in terms of revenue, from 2021 to 2028. Rising demand for cloud storage and internet services from the growing population has increased the installation of data centers and servers, hence propelling the demand for Polycarbonate (PC) and Polycarbonate/Acrylonitrile Butadiene Styrene (PC/ABS) across the IT server component manufacturers in the region.
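As a quick sanity check on those headline numbers (this back-calculation is an inference from the report's own figures, not a number stated in it), the 2028 target and the CAGR imply a 2021 base of roughly USD 61 million:

```python
# Back out the implied 2021 market size from the report's 2028 target
# and CAGR, compounding over 2028 - 2021 = 7 periods.
target_2028 = 81.95            # USD million, per the report
cagr = 0.043                   # 4.3% per year
years = 2028 - 2021
implied_2021 = target_2028 / (1 + cagr) ** years
print(round(implied_2021, 1))  # prints 61.0
```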

Polycarbonate (PC) is a highly durable, flexible, and thermally resistant resin suitable for manufacturing the twinwalls used for thermal separation across the aisle in a server room. Data centers are a significant component of modern technology, especially technologies that rely on cloud computing. In addition, components such as housings for cooling systems require higher heat resistance, for which the aforementioned resins are highly suitable.

Major tech companies such as Google LLC; Amazon.com, Inc.; and HSN, Inc. utilize polycarbonate-based twinwall and multiwall for containment systems to provide reliable IT environments, allowing maximum cooling for servers and components along with physically separating a group of servers and batteries from each other. Furthermore, high conductivity characteristics of polycarbonate and polycarbonate/acrylonitrile butadiene styrene are suitable for the manufacturing of connectors used across the IT industry.

Server racks, including wall-mount, floor-standing, and open-frame designs, are used for mounting, organizing, and securing equipment such as servers, routers, hubs, switches, and audio/video components. In addition, they provide cable management and enable optimized airflow for increased operational efficiency and prolonged equipment life. Hence, polycarbonate and polycarbonate/acrylonitrile butadiene styrene are suitable for manufacturing server racks.

North America PC And PC/ABS In IT Server Market Report Highlights


In terms of revenue, the twinwall sub-segment accounted for a prominent share of the market in 2020 across both the polycarbonate (PC) and polycarbonate/acrylonitrile butadiene styrene (PC/ABS) segments, due to its high utility in creating a physical barrier across the aisle to manage heat generated by the servers

In 2020, the U.S. accounted for the major market share across North America in terms of volume and is estimated to be more than 80.0%. This is due to the increasing demand for data centers and cloud storage from the rising population across the country

The presence of major tech giants, including International Business Machines Corporation (IBM); Google LLC; Facebook, Inc.; and Microsoft Corporation, has increased the demand for servers and server components across the country, in turn propelling the demand for polycarbonate and polycarbonate/acrylonitrile butadiene styrene for manufacturing servers and server components

Increasing demand for PC and PC/ABS across the IT server industry has drawn plastic molding companies to the region. Companies such as Server Technology; Tenere Inc.; AIC Inc.; IT Creations, Inc.; and Jameco Electronics are the prominent plastic injection molders serving the IT industry across North America

Key Topics Covered:

Chapter 1 North America PC and PC/ABS in IT Server Market: Product by Application Estimates & Trend Analysis
1.1 North America PC and PC/ABS in IT Server Market: Product by Application movement analysis, 2020 & 2028
1.2 Polycarbonate
1.2.1 North America PC and PC/ABS in IT Server market estimates and forecasts, by Polycarbonate, 2017 - 2028 (Tons) (USD Million)
1.3 Polycarbonate/Acrylonitrile Butadiene Styrene (PC/ABS)
1.3.1 North America PC and PC/ABS in IT Server market estimates and forecasts, by Polycarbonate/Acrylonitrile Butadiene Styrene (PC/ABS), 2017 - 2028 (Tons) (USD Million)

Chapter 2 North America PC and PC/ABS in IT Server Market: Country Estimates & Trend Analysis
2.1 North America PC and PC/ABS in IT Server Market: Country movement analysis, 2020 & 2028
2.2 U.S.
2.2.1 U.S. PC and PC/ABS in IT Server market estimates and forecasts, 2017 - 2028 (Tons) (USD Million)
2.2.2 U.S. PC and PC/ABS in IT Server market estimates and forecasts, by Polycarbonate, 2017 - 2028 (Tons) (USD Million)
2.2.3 U.S. PC and PC/ABS in IT Server market estimates and forecasts, by Polycarbonate/Acrylonitrile Butadiene Styrene (PC/ABS), 2017 - 2028 (Tons) (USD Million)
2.3 Canada
2.3.1 Canada PC and PC/ABS in IT Server market estimates and forecasts, 2017 - 2028 (Tons) (USD Million)
2.3.2 Canada PC and PC/ABS in IT Server market estimates and forecasts, by Polycarbonate, 2017 - 2028 (Tons) (USD Million)
2.3.3 Canada PC and PC/ABS in IT Server market estimates and forecasts, by Polycarbonate/Acrylonitrile Butadiene Styrene (PC/ABS), 2017 - 2028 (Tons) (USD Million)
2.4 Mexico
2.4.1 Mexico PC and PC/ABS in IT Server market estimates and forecasts, 2017 - 2028 (Tons) (USD Million)
2.4.2 Mexico PC and PC/ABS in IT Server market estimates and forecasts, by Polycarbonate, 2017 - 2028 (Tons) (USD Million)
2.4.3 Mexico PC and PC/ABS in IT Server market estimates and forecasts, by Polycarbonate/Acrylonitrile Butadiene Styrene (PC/ABS), 2017 - 2028 (Tons) (USD Million)

Chapter 3 North America PC and PC/ABS in IT Server Market: Market Drivers
3.1 Polycarbonate and PC/ABS Plastics are Utilized Vs. Other Thermoplastics
3.2 Comparative Analysis of Plastic Properties

Chapter 4 North America PC and PC/ABS in IT Server Market: Price Trend Analysis
4.1 North America PC and PC/ABS in IT Server Market: Polycarbonate (PC) Price Trend Analysis, 2017 - 2021 (USD/Kg)
4.1.1 Polycarbonate Pricing Analysis, by Manufacturers, 2020 & 2021 (USD/Kg)
4.2 North America PC and PC/ABS in IT Server Market: Polycarbonate/Acrylonitrile Butadiene Styrene (PC/ABS) Price Trend Analysis, 2017 - 2021 (USD/Kg)

Chapter 5 North America PC and PC/ABS in IT Server Market: Competitive Landscape
5.1 Vendor Landscape
5.1.1 List of IT Server Manufacturers (Market Ranking Analysis)
5.1.2 List of Plastic Injection Molders for IT Servers & IT Server component manufacturers
5.1.3 List of PC and PC/ABS Product Types offered by SABIC for IT Servers

For more information about this report visit https://www.researchandmarkets.com/r/kyiouz



Backup heads to cloud as ransomware hits 76% and RTOs/RPOs fail – ComputerWeekly.com

In a disaster recovery scenario, most organisations can't recover the data they want or do so quickly enough.

Meanwhile, ransomware attacks, now firmly on the list of potential disasters, have been suffered by 76% of organisations in the past 12 months, with successful malware intrusions mostly down to users clicking malicious links and to compromised admin credentials.

Those are some of the findings of the Veeam Data Protection Trends Report 2022, which questioned more than 3,000 IT decision-makers, mostly in organisations of more than 1,000 employees, across 28 countries.

Top-level findings in the survey included that the average outage lasts 78 minutes and the estimated average cost is $1,467 per minute or $88,000 per hour with 40% of servers suffering one unexpected outage a year.
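Those averages can be cross-checked directly (assuming, as a simplification the report may not make, that the per-minute cost and the average duration simply multiply):

```python
# Cross-check the survey's outage-cost averages.
cost_per_minute = 1467             # USD, per the survey
avg_outage_minutes = 78            # average outage length, per the survey

print(cost_per_minute * 60)                   # 88020 -- matches the "$88,000 per hour" figure
print(cost_per_minute * avg_outage_minutes)   # 114426 -- implied cost of one average outage
```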

A key finding was that recovery time objectives and recovery point objectives (RTOs and RPOs) are not being achieved. That is, there is an availability gap and a protection gap, according to the Veeam view of the survey results.

When asked whether their organisation can recover applications as quickly as service-level agreements (SLAs) demand, and whether they can restore all the data that SLAs specify, the answers were resoundingly that they couldn't.

Nine out of 10 (90%) said they could not recover data as quickly as they wanted, and 89% said they couldn't recover all they wanted.

Backup is increasingly making use of the cloud, and by 2024, 79% of organisations expect to use the cloud in some form for backup purposes. That is a projected increase from 67% this year.

Meanwhile, disaster recovery (DR) is also expected to undergo a big shift to the cloud, according to the survey. While 34% managed DR using their own datacentres in 2022, the expectation of respondents was that 53% would be done via the cloud and a disaster-recovery-as-a-service (DRaaS) provider by 2024, although 28% of data would still be held on customers' own sites.

How to recover from a disaster varied, with most (61%) saying they would restore to on-premises sites, while 39% would recover to the cloud. In both cases a significant portion (40% and 20%, respectively, of all who responded to these questions) said reconfiguring networking would be manual.

And with servers, 29% expect to manually reconfigure them, while 45% will use pre-written scripts, and 25% have orchestrated workflows.

But the datacentre is not dead. That is the Veeam take on results showing the proportion of virtual machines hosted in the cloud is already close to half (47%), according to those questioned, and expected to rise a little by 2024 (to 52%). The datacentre will remain vital, however, with the remainder split equally between physical and virtual servers staying on-site.

Ransomware attacks have been suffered by 76% of those questioned. Only a quarter (24%) had suffered no ransomware attack in the past 12 months, but 23% had been the victim of two, while 19% suffered three and 16% only one.

When asked about causes in more detail, malicious links were the most common means of entry (25%), followed by compromised credentials such as logins, passwords and remote desktop protocol (RDP) vulnerabilities (23%). One-fifth (20%) of ransomware attacks gained entry via an infected patch or software update, while 17% came from spam email and 12% from an insider threat.

When it comes to recovering data from a ransomware attack, an average of 64% of data was restored. More than one-third (36%) of respondents got more than 80% back, while 19% restored between 61% and 80%, 20% restored between 41% and 60%, and 18% recovered between 21% and 40%.

Containers are a small but significant growth area as a means of running applications that is cloud-native and portable between locations. According to the Veeam survey, 56% of respondents use containers in production, and 35% plan to.

But the survey found an uneven set of responses to the question of who is responsible for container application data protection. Just under one-fifth (19%) said it is handled by the main backup team, while 21% said it is handled by those who manage Kubernetes. Meanwhile, 27% said backup is handled by the application owners and 28% by the team that manages storage for components used by Kubernetes.

According to Veeam, snapshots taken throughout the working day need to be used in conjunction with backups, which are usually run once a day. That is because, according to the survey results, there is not much difference between the downtime tolerance of high-priority data, of which 55% has a tolerance of one hour, and normal data, of which 49% can stand the same delay.

But according to the survey, most respondents already protect both tiers frequently. Nearly one in five (19%) protect high-priority applications continuously, while 13% do the same for normal applications. Both are protected every 15 minutes by 17% of respondents and every hour by 19%. About one-fifth (18% for high-priority, 20% for normal) protect data no less frequently than every two hours. Those figures become 14% and 16% for intervals of between two and four hours, 7% and 8% for every four to six hours, 3% for six to 12 hours, and 2% and 4% for 12 to 24 hours.
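The reasoning behind pairing frequent snapshots with daily backups is that worst-case data loss (the recovery point objective) is bounded by the gap between protection points. A toy sketch of that bound (illustrative logic only, not Veeam's software):

```python
# Worst-case data loss equals the interval since the last protection
# point: everything written after it is at risk if the system fails
# just before the next one.

def worst_case_loss_minutes(interval_minutes):
    return interval_minutes

tolerance = 60             # the one-hour tolerance cited for 55% of high-priority data
hourly_snapshots = 60
daily_backup = 24 * 60

assert worst_case_loss_minutes(hourly_snapshots) <= tolerance  # hourly snapshots meet it
assert worst_case_loss_minutes(daily_backup) > tolerance       # a daily backup alone cannot
```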

The survey also asked about digital transformation projects, and found the pandemic had tended to speed things up, often by accelerating already-planned modernisation. Most (73%) said they had speeded up digital transformation initiatives, while 18% said they were unaffected, and 9% said things had slowed.

That said, there are obstacles. The most commonly cited are lack of skills (54%), dependence on legacy systems (53%), focus on maintaining operations due to the pandemic (51%), lack of management buy-in (43%), lack of time (39%) and lack of money (35%). Only 8% said nothing stood in the way of their digital transformation initiatives.
